Love the idea or hate it (and many people fall into the latter category), AI language and text-to-image models have arrived. Anyone can now create prose, programs, and pictures in seconds simply by typing a few instructions into a website. You may be thinking "wonderful, no more dull report and contract writing". However, there are serious concerns about the accuracy of the information ChatGPT produces, and lawsuits by artists, engineers, and other creatives against the developers of AI language and art models are mounting. There are also potential legal issues for users of ChatGPT, such as copyright infringement and defamation.

Before exploring these legal challenges, it is useful to explain what AI language and art models are. For ease of reference, I will refer to the best-known, ChatGPT, but the basic principles apply to most other chatbots, such as Meta's Llama and Google's Bard.

WHAT IS CHATGPT?

ChatGPT, which stands for "Chat Generative Pre-trained Transformer", was created by OpenAI and launched in November 2022. Some commentators consider it the most significant technological development since the launch of the Apple iPhone in 2007. It can produce human-like responses to a vast range of questions and is often (but not always) accurate.

ChatGPT works by predicting the next word in a sequence of words. It is underpinned by an enormous language model, which OpenAI created by feeding it some 300 billion words systematically scraped from the internet in the form of books, articles, websites, and blog posts. ChatGPT used this data to learn how to predict the next word, and eventually it became sufficiently trained to produce human-like responses to tasks given to it via the front-end 'Chat' interface.
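To make "next-word prediction" concrete, the toy sketch below counts which word follows which in a tiny sample sentence and then predicts the most common successor. This is purely illustrative and is not OpenAI's actual method: the sample text, variable names, and function are invented for this example, and real large language models perform the same basic task with neural networks trained on billions of words rather than simple counting.

    # Illustrative toy model of next-word prediction (not OpenAI's method).
    # It counts which word follows which in a small sample text; large
    # language models do the same basic task with neural networks trained
    # on billions of words.
    from collections import Counter, defaultdict

    sample_text = "the court ruled that the claim was valid and the court ordered damages"

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    words = sample_text.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the sample text."""
        candidates = follows.get(word)
        if not candidates:
            return "<unknown>"
        return candidates.most_common(1)[0][0]

    print(predict_next("the"))  # -> "court" (follows "the" twice, "claim" once)

The point that matters for the legal issues discussed below is where that text comes from: the model's apparent fluency is built entirely on the material it was fed during training.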

In a recent article, Preston Gralla provided a brilliant analogy for how AI language and text-to-image models operate:

"To do its work, AI needs to constantly ingest data, lots of it. Think of it as the monster plant Audrey II in Little Shop of Horrors, constantly crying out "Feed me!"

OpenAI and the other developers of AI text and image-generating models did not seek permission to use third-party words and art to feed their creations. This fact forms the basis of several class actions currently underway around the world.

WHAT ARE THE LEGAL GROUNDS FOR THE LAWSUITS AGAINST CHATGPT?

The legal claims against ChatGPT and other language and image-generating models fall into several categories:

  • Intellectual property infringement – lawsuits have been launched in several countries, including the UK and US, concerning AI developers scraping content from the internet to train their models without the original creators' permission. For example, Stability AI, the London-based company behind the text-to-image model Stable Diffusion, is being taken to court by Getty Images, which argues that Stability AI "unlawfully copied and processed millions of images protected by copyright and the associated metadata" to train Stable Diffusion. A class action lawsuit has also been brought by a group of US-based artists against the companies behind several text-to-image generators, including Stability AI, DeviantArt, and Midjourney; the artists claim their original work was scraped from the internet without permission and used to train the tools.
  • Defamation – at the time of writing, the mayor of Hepburn Shire, 120km northwest of Melbourne, Australia, was threatening to sue OpenAI for defamation after ChatGPT (which is also incorporated into Microsoft's search engine, Bing) falsely named him as a guilty party in a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s; in fact, he was the whistleblower who reported the conduct.
  • Breach of open-source code licences – a class action lawsuit has been brought against GitHub, Microsoft, and OpenAI (among others). The Claimants allege that the Defendants violated open-source licence terms and conditions and breached copyright when they used code created by others to build and train Copilot, an AI coding assistant.
  • Privacy and data protection breaches – the scraped data used to train ChatGPT scooped up enormous amounts of personal information without the consent of data subjects, potentially breaching privacy and data protection regulations around the world, including the UK and EU GDPR and the California Privacy Rights Act (CPRA). On 31 March 2023, Italy's data regulator, the Garante per la Protezione dei Dati Personali, said it would block ChatGPT and investigate whether the AI model complied with the EU GDPR. The watchdog stated that no legal basis had been provided to justify "the mass collection and storage of personal data for the purpose of 'training' the algorithms underlying the operation of the platform". It also voiced concern that ChatGPT does not verify users' ages and therefore "exposes minors to absolutely unsuitable answers compared to their degree of development and awareness". At the beginning of May, the regulator allowed ChatGPT to resume operating in Italy, saying it would carry on its "fact-finding activities regarding OpenAI under the umbrella of the ad-hoc task force that was set up by the European Data Protection Board".

WHAT ARE THE RISKS FOR BUSINESSES USING CHATGPT?

Although ChatGPT and its offshoots may seem like a productivity dream come true, caution must be exercised when using them to produce written text and images for business purposes. There may be issues concerning copyright and breaches of the UK GDPR and the Data Protection Act 2018. In addition, as the threatened defamation claim by the mayor of Hepburn Shire demonstrates, there may be serious legal consequences for organisations if ChatGPT makes mistakes or displays bias, both of which it can do. To avoid potential claims, businesses and individuals must undertake a risk assessment before using ChatGPT for a particular project and establish robust due diligence checks on the accuracy and impartiality of the content it produces.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.