Within only a few days of launching, OpenAI's ChatGPT had millions of users in awe. Using a simple question-and-answer chat structure, a user can ask the ChatGPT bot a question on any topic, or request it to fulfil a range of demands, including writing poems, essays and songs, or even suggesting dinner recipes based on a set list of ingredients. ChatGPT's success has the potential to upend the internet search space, which has set off alarm bells at Google. Google quickly responded with its own AI chatbot, Google Bard. Ironically, it was a team of Google engineers who first proposed the transformer architecture in a 2017 paper, sparking the current generative AI craze.
Generative AI is more sophisticated than a conventional search engine: it can be used for various forms of creative expression and content creation. Since the launch of OpenAI's GPT-3 (similar to GPT but better suited to a broader range of general-purpose tasks), the number of generative AI systems has spiked, from open-source models (such as BLOOM) to proprietary ones (such as Canada's Cohere) that can create art, music, literature, and code, often without any indication that a non-human actor was involved. Since 2020, venture capitalists (VCs) have increased their investments in generative AI companies by 425% to US$2.1-billion. By some estimates, the market for generative AI will reach US$111-billion by 2030.
The breadth of available generative AI tools will only grow and get more specialized. Some generative AI tools already claim to create better marketing and sales emails (Lavender), draft legal documents (Harvey), edit videos (Runway), help design apps and websites (Diagram) and create custom voices for many commercial uses such as ads, call centres and audiobooks (Resemble.AI).
The legal ramifications of generative AI — such as what constitutes the reproduction of a copyrighted work and how to track it — are even more relevant now.
Generative AI tools are particularly data hungry, and they are invariably trained on data that is protected by copyright. In turn, their algorithms produce new works, which may themselves be protected by copyright if a human author exercises sufficient skill and judgment in creating them. Because of this, authors and rights holders are scrutinizing AI systems to determine whether their copyrights are infringed when their works are used as training material.
Under the Canadian Copyright Act, an author's work is generally protected by copyright from the moment that it is created and fixed in a form (e.g., in writing), without the need for any registration or legal action. Generative AI systems use such existing works as input data to enable the AI to create new works. The legal challenge is that the data AI systems rely on to create new content is often protected by copyright, which means that the works are copied, and the owner's rights potentially infringed, by the act of training the AI software.
One could argue that the generative AI system is the original author, though copyright protects human creation only. The algorithm draws on previous works to create new works, in the same way that an independent human artist does — their original works are informed by artists before them, but their works are nonetheless original. The main difference between human artists and generative AI "artists" is that humans do not need to make copies of the prior works to be inspired. Prior works may be merely perceived by the senses. In contrast, generative AI tools technically copy the prior works by encoding them — that is the machine's "inspiration".
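The distinction between perceiving a work and copying it can be made concrete. The following is a minimal, illustrative Python sketch (not any particular vendor's actual pipeline) of why training necessarily involves copying: before a model can learn from a work, the work must be encoded into numbers, and that encoding is a reversible copy of the original.

```python
# Illustrative only: a toy character-level encoder, standing in for the
# tokenization step that real generative AI training pipelines perform.

def build_vocab(texts):
    """Map each distinct character across the corpus to an integer ID."""
    chars = sorted({ch for text in texts for ch in text})
    return {ch: i for i, ch in enumerate(chars)}

def encode(text, vocab):
    """Encode a work as a sequence of token IDs: a copy in another form."""
    return [vocab[ch] for ch in text]

def decode(ids, vocab):
    """The encoding is reversible, so the original work is recoverable."""
    inverse = {i: ch for ch, i in vocab.items()}
    return "".join(inverse[i] for i in ids)

corpus = ["an original poem", "another protected work"]  # hypothetical works
vocab = build_vocab(corpus)
tokens = encode(corpus[0], vocab)
assert decode(tokens, vocab) == corpus[0]  # the copy reproduces the original
```

However the tokens are later consumed by the learning algorithm, the act of encoding itself reproduces the work, which is the step that engages the reproduction right discussed above.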
This month, the U.S. Copyright Office considered copyright in AI-produced artistic works, a decision that may have consequences for the legal landscape of AI copyright more broadly. An author applied to the U.S. Copyright Office for copyright over their comic book and was originally granted registration for the work. However, once the U.S. Copyright Office discovered that the images in the comic book were created using the AI tool Midjourney, the registration was revised. The author was recognized as the copyright holder of the text, and of the selection, coordination and arrangement of the comic book's written and visual elements, but not of the AI-generated images. The U.S. Copyright Office held that the images generated by Midjourney could not be copyrighted because they were not the product of human authorship.
AI has also been analyzed through the doctrines of fair use (in the United States) and fair dealing (in Canada), which create exemptions for certain unauthorized uses of copyrighted materials. Generally, if an unauthorized use of copyrighted material is found to constitute fair use or fair dealing, it will not be considered an infringement of copyright. When deciding whether fair use covers an unauthorized use of a copyrighted work, courts are guided by a non-exhaustive list of considerations, applied holistically and on a case-by-case basis. Although fair use has yet to be applied to generative AI, it has been considered in several other contexts involving emerging digital technologies, including Google's digital copying of physical books, the text of which was later made fully searchable on Google's engine. Legal commentators are eagerly awaiting the Supreme Court of the United States' decision in Andy Warhol Foundation for the Visual Arts, Inc. v. Goldsmith, which will address what constitutes a transformative (and thus fair) use versus a derivative (and therefore infringing) use.
Whether the fair use doctrine applies to generative AI will likely turn on several factors, including the location of the business, the location of the training data (as other countries may not have an equivalent fair use doctrine) and whether the generative AI platform in question transforms the original work. Potential infringers might also argue that the processes used to scrape data to train generative AI platforms constitute research under the fair use and fair dealing provisions.
Copyright infringement cases related to the use of generative AI are evolving rapidly. In November 2022, plaintiffs filed a proposed class action lawsuit in California against Microsoft, its subsidiary GitHub, and OpenAI, alleging that the companies illegally used others' copyrighted materials to build and train their Copilot service, which uses AI to write software. Most recently, in January 2023, Getty Images sued Stability AI, maker of the AI art tool Stable Diffusion, in London, U.K., alleging that the company unlawfully copied and processed millions of copyright-protected images to train its software. A similar lawsuit was filed in the Delaware U.S. District Court in February 2023. We can expect similar lawsuits to continue to arise globally.
Beyond copyright, AI poses questions for the broader intellectual property landscape. Following a decision of the U.K. Intellectual Property Office, the High Court and, later, the Court of Appeal of the United Kingdom considered whether an AI machine can be named as the inventor on a patent application. The appellant had created an AI machine, "DABUS", which then devised two inventions. The appellant sought to patent the two inventions, naming DABUS as the inventor. In a split judgment, the Court of Appeal held that the law does not recognize an AI machine as an inventor, and that the law should be applied as it presently stands. The case has now been appealed further to the U.K. Supreme Court, and a judgment is expected in 2023.
None of this is to say that founders should be deterred, just careful. Startups should work with IP counsel early in the process to discuss ways to understand and mitigate legal risks.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.