On February 27, 2023, the US Federal Trade Commission (FTC) published new Business Blog guidance from its Division of Advertising Practices staff about marketing claims for artificial intelligence products. While prior FTC AI guidance focused on the need to avoid using automated tools that have biased or discriminatory impacts, the latest post emphasizes that AI tools also must "work as advertised." The FTC warns marketers against exploiting the sensibilities of a public that may be primed by common "themes of magic and science" to believe unfounded claims about AI-powered products.

The blog post notes that while "artificial intelligence" is an ambiguous concept with many possible definitions, "one thing is for sure: it's a marketing term" that advertisers must not overuse or abuse. The post makes clear that the FTC does not view "artificial intelligence" as mere advertising puffery and will not hesitate to hold marketers accountable for failing to substantiate – i.e., support with competent and reliable evidence – uses of the term that imply a particular performance benefit, including superiority over competing non-AI products.

The post poses four questions that marketers should ask themselves before making claims about their AI-powered product's performance or efficacy:

  1. "Are you exaggerating what your AI product can do?" The FTC expects AI performance claims to be based on actual scientific support, not "science fiction." The post also cautions that performance claims for AI products would be deceptive if they apply only to certain types of users or under limited conditions (and these limitations are not clearly disclosed).
  2. "Are you promising that your AI product does something better than a non-AI product?" FTC staff warn that claims promoting an AI-enabled product as better than a competitive (or predecessor) non-AI product also must be substantiated by actual data – e.g., competent comparative performance testing results. It further cautions that if such testing data is impossible to acquire, no comparative superiority claim should be made.
  3. "Are you aware of the risks?" Companies must identify the reasonably foreseeable risks and impacts of an AI product before releasing it to the market. Marketers should perform sufficient risk evaluations to determine that their AI product will perform as described and will not produce harmfully biased results. This requirement applies equally to directly developed products and to products developed for a marketer by a third party. FTC staff caution that marketers of AI technology developed by others cannot just blame third-party developers by arguing that they should not be held responsible because the technology is a "black box" that they did not understand or know how to test.
  4. "Does the product actually use AI at all?" The FTC staff author cautions against making "baseless claims that your product is AI-enabled" or "AI-powered." Discerning what substantiation the FTC will consider sufficient for such claims is challenging, given the FTC's acknowledgment that the meaning of "artificial intelligence" is ambiguous. But the post emphasizes that "merely using an AI tool in the development process is not the same as a product having AI in it." It also suggests that products may reasonably be classified as AI products if their core functions or features "use computation to perform tasks such as predictions, decisions, or recommendations."

AI product marketers also should keep in mind the FTC's prior warnings that AI technology must be used without producing biased or otherwise unfair outcomes. AI should be trained on nondiscriminatory data sets – i.e., sets that do not have gaps in data for particular populations – and should not produce results that discriminate on the basis of race, gender or membership in another protected class.

FTC investigations and enforcement actions often follow in the wake of new staff guidance. Accordingly, AI marketers would be well-advised to carefully evaluate the claims they are making to ensure they are not exaggerating what their algorithms can do. Marketers also should confirm that their AI has been designed and tested to not inadvertently produce discriminatory outcomes.