ARTICLE
14 April 2023

The Risks And Rewards Of Generative AI

Global Advertising Lawyers Alliance (GALA)


From generating sophisticated music, artwork, text and video to having a conversation, passing complex standardized tests and, even, as one hapless New York Times journalist chronicled in a now-viral article, professing love for humans, "generative AI" is revolutionizing the way that we create and interact with content.

Agencies and advertisers are understandably eager to harness AI's enormous potential to power campaigns and create innovative content. But the astonishing pace at which artificial intelligence technology is evolving brings not only immense creative rewards but also a host of potential legal risks. Users should keep this in mind when considering whether and how to implement generative AI tools.

Generative AI: How Does it Work?

Most generative AI platforms generate content – music, artwork, text and video – based on text prompts, images or musical notes that users provide (the "input"). For example, a user may request AI-generated content by typing a text command describing the content they would like the platform to produce, such as "flower painting, in the style of 1970s pop art." To generate content, AI models are "trained" to understand the relationship between an image (or other creative content) and the words used to describe it. Many generative AI platforms are trained by processing vast quantities of content scraped without permission from various sources across the internet. Stable Diffusion, for example, one of the largest AI platforms, has reportedly scraped and processed billions of images. These deep learning models and their algorithms allow generative AI platforms to produce new content – visual, audio, audiovisual or written material, as well as chat answers (the "output") – in mere seconds in response to the user's input.
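
For readers who want to see this input-and-output flow in concrete terms, the short sketch below shows how a text prompt like the one above might be submitted to an open-source text-to-image model. The diffusers library and the model identifier used here are illustrative assumptions only; this article does not describe or endorse any particular tool, and actual platforms differ in their interfaces and terms.

```python
# A minimal sketch of the prompt-to-output flow described above, using the
# open-source Hugging Face "diffusers" library. This is an illustrative
# assumption: the article does not reference any particular tool, and the
# model identifier below may differ from what is currently published.
from diffusers import StableDiffusionPipeline

# The "input": a natural-language prompt describing the desired content.
prompt = "flower painting, in the style of 1970s pop art"

# Load a pretrained text-to-image model; the learned relationship between
# words and images discussed above is encoded in the model's weights.
pipe = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1")

# The "output": a newly generated image produced in response to the prompt.
image = pipe(prompt).images[0]
image.save("flower_pop_art.png")
```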

Basics First: Read the Terms of Service

As with the use of any third-party platform, it's important to read the terms of service. If your company intends to use AI-generated content in advertising materials, it's critical to first ensure that the platform permits commercial use of its output.

In addition, be aware of the risks of using AI-generated content. Notably, many AI platforms do not provide standard legal protections such as representations, warranties and indemnities; in fact, many require users to indemnify the platform for the users' exploitation of the output. Nor do these platforms represent that their output will not infringe others' rights. This means that your company will be using the platform's AI-generated content at its own (or its client's) risk. It is therefore crucial for agencies to be transparent with their clients about any plans to use generative AI tools in work for those clients.

Copyright Ownership of AI-Generated Materials

Right now, when using a generative AI platform to produce new content, your company does not own the copyright in AI-generated output.

The U.S. Copyright Office has taken the position that AI-generated works generally do not qualify for copyright protection and cannot be registered because there is no "human authorship" involved. That decision is being challenged in federal court. The Copyright Office recently published additional guidance noting that it will consider the degree of human authorship when determining whether an AI-generated work is eligible for copyright protection. As such, if AI-generated work is used as a starting point for subsequent creative contribution, the end result, or elements thereof, may potentially be protected by copyright (although no court has yet opined on this). But for now, AI-generated content on its own, without additional human contribution, is not protectable under U.S. copyright law.

This means two things:

  1. your company may not have legal recourse against third parties who use its AI-generated work without permission; and
  2. if your company is an advertising agency or creative service vendor, it cannot transfer copyright rights in AI-generated output to its clients.

Likewise, AI platforms do not represent and warrant that their output is original and not infringing. Therefore, agencies and advertisers need to be transparent with each other about the use of AI-generated content and discuss how it will be treated under their agreements.

Potential Infringement Arising From AI Input

As noted above, AI platforms rely on vast amounts of training data, typically scraped without permission from internet sources. Use of this data potentially exposes both the AI platform and the user to liability for copyright infringement, although defenses may be available. Indeed, multiple lawsuits have been filed in the United States against the company behind Stable Diffusion and two other AI platforms. These lawsuits allege, among other things, that artists' images were scraped without permission to "train" the AI platforms – conduct that, the plaintiffs claim, amounts to reproducing copyrighted material without permission and creating unauthorized derivative works.

Although no court has yet decided the issue, an argument could be made that, because these scraped images and data are used only as references and do not necessarily result in substantially similar AI-generated output, such intermediate use of the content is a fair use rather than unlawful copying. In addition, even if that use is not a fair use, it is the AI platform – and not the agency or advertiser – that made the copies; the agency and advertiser may therefore argue that the platform alone should be responsible for any infringing use and that the end user should not be liable.

Potential Infringement Arising From AI Output

Given the "black box" nature of generative AI, it is often not possible to identify the specific source of, or inspiration for, a particular output, and that output may infringe the copyright in a pre-existing work. This may be the case if the output is substantially similar to protectable expression in a copyrighted work used to train the AI platform. Although "style," in and of itself, is generally not protectable under copyright law, a prompt that seeks an output in the "style of" a particular artist may well result in a legal claim, depending on how similar the output is to the artist's original work. Likewise, the output could raise trademark issues if the AI-generated content depicts or incorporates logos or trademarked characters, even in distorted form. To reduce the risk of an infringement claim based on AI output:

  • Ensure that your company does not use text prompts that are likely to produce an infringing work – such as prompting the AI platform to generate work in the style of a particular artist, writer or musician.
  • Do not upload any uncleared reference materials (such as images or songs from a particular artist) to help guide the AI platform to achieve a desired – but potentially infringing – result.

If the AI platform was trained on information scraped from the open internet, there is always a risk that the output may be infringing, regardless of any direction provided to the platform.
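
By way of illustration only, a company could add a simple automated screen that flags risky prompts (for example, prompts naming a specific artist or asking for work "in the style of" someone) before they reach an AI platform. The function and blocklist below are hypothetical sketches and are not a substitute for human review or legal advice.

```python
# A hypothetical, simplified prompt-screening step illustrating the mitigation
# points above: flag prompts that name specific artists or ask for work "in the
# style of" someone before they are sent to a generative AI platform. The
# blocklist is illustrative only and is no substitute for legal review.

BLOCKED_PHRASES = [
    "in the style of",   # style-of prompts raise the infringement risks noted above
    "banksy",            # example artist names a company might choose to flag
    "andy warhol",
]

def screen_prompt(prompt: str) -> list[str]:
    """Return any flagged phrases found in the prompt (empty list if clean)."""
    lowered = prompt.lower()
    return [phrase for phrase in BLOCKED_PHRASES if phrase in lowered]

flags = screen_prompt("portrait of a dog in the style of Banksy")
if flags:
    print("Prompt flagged for human review:", flags)
else:
    print("Prompt passed automated screening.")
```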

Right of Publicity and Privacy Risks

In addition to copyright and trademark risks, efforts should be made to ensure that AI-generated output does not violate an individual's right of publicity or privacy rights. Because of the platforms' "black box" nature, an AI-generated image of a person, which is based on training data consisting of photographs of actual individuals, may very closely resemble an identifiable individual. To reduce the risk of a right of publicity or right of privacy claim from such individuals:

  • Do not use text prompts that are likely to create an image, video or sound that looks or sounds like a specific person. For example, do not include celebrity names in text prompts or otherwise prompt the platform to generate work that resembles a particular person.
  • To the extent that your company intends to modify images or voices of individuals with whom it has agreements, review the applicable talent agreements to determine whether and to what extent your company may produce and use such modified AI-generated content.
  • As an alternative to using AI-generated images of people, consider populating AI-generated materials with licensed images of actual individuals (such as from stock libraries).

Ethical and Confidentiality Risks

Generative AI platforms and their algorithmic models are only as good as the data on which they are built. Because much of that training data is drawn from across the internet, it contains biased and false information. For instance, a prompt seeking an image of a CEO may be more likely to produce images of men than of women, reflecting historical gender bias.

Indeed, the Federal Trade Commission (FTC) has opined that AI use "presents risks, such as the potential for unfair or discriminatory outcomes or the perpetuation of existing socioeconomic disparities." To illustrate this risk, the FTC's guidance cites a study of an algorithm used to target medical interventions to the sickest patients that wound up funneling resources to a healthier, white population, to the detriment of sicker, black patients.

Moreover, generative AI platforms are prone to drawing on the misinformation and disinformation found throughout the internet sources on which their models were trained. There is also the well-documented tendency of AI tools to "hallucinate," producing information that appears sound in logic and reasoning but is entirely fabricated.

Companies using AI for predictive, biometric or diagnostic purposes should consider whether the data model they are using accounts for biases and includes a truly representative data set. Similarly, companies seeking to use AI to synthesize or produce research or arguments should always verify the accuracy and reliability of the output produced by such platforms to guard against the further spread of false information. Companies may also want to consider whether they should engage an independent third party to test and audit any AI tools the company is using to ensure that their use is not producing discriminatory, incorrect or otherwise skewed outputs.

It is also important to be aware that inputs submitted to AI platforms, and the resulting outputs, may be fed back into the algorithm to continue improving the platform's technology. As a result, avoid including confidential information or personal data in prompts, as this information (or portions thereof) could be incorporated into output generated for another user.
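
As a rough illustration of that precaution, prompts could be run through a simple redaction step before submission. The patterns below are hypothetical examples; they catch only obvious personal data (email addresses and phone numbers) and will not identify every kind of confidential information.

```python
# A rough, hypothetical illustration of the precaution above: redact obvious
# personal data from a prompt before it is submitted to a third-party AI
# platform. Simple patterns like these will not catch every kind of
# confidential or personal information.
import re

REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace matched personal data with placeholder tags before submission."""
    for label, pattern in REDACTION_PATTERNS.items():
        prompt = pattern.sub(f"[{label} redacted]", prompt)
    return prompt

print(redact("Draft an ad targeting jane.doe@example.com, phone 212-555-0147."))
# -> "Draft an ad targeting [email redacted], phone [phone redacted]."
```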

Looking Toward the Future

Agencies and advertisers must closely monitor these emerging technologies, not only to remain competitive but also to navigate the legal risks they present.

Companies should remain aware of the intellectual property risks and contractual restraints that may affect their use of generative AI outputs, and should ensure that strong ethical and data security policies are in place to guard against the risks that accompany these new technologies.

For More Information

Due to the fluid and near-daily evolution of AI technology, it is advisable to consult with legal counsel whenever considering using or incorporating AI into a business tool or other public-facing content.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
