The Federal Trade Commission has opened an investigation into whether OpenAI, the company behind ChatGPT, has violated consumer protection laws by "putting personal reputations and data at risk."

Particularly striking in the civil investigative demand ("CID") sent by the Commission is the clear focus on the advertising and marketing claims made by OpenAI. The CID requests, among other documents:

  • All advertisements or public statements relating to the capabilities, accuracy, or validity of OpenAI's products;
  • All representations or disclosures disseminated regarding the limitations or risks associated with OpenAI products;
  • Documents related to research or efforts to assess consumer understanding of the OpenAI products; and
  • Detailed descriptions of how the OpenAI products have been marketed to business customers.

The FTC has previously warned that AI companies are not exempt from existing regulations against deceptive practices, including unsubstantiated claims about a product's ability, efficacy, and accuracy. And while the FTC has recently increased its attention to data privacy issues, the agency has made clear that "false or unsubstantiated claims about a product's efficacy are [its] bread and butter." AI companies should remain cognizant of the full breadth of issues the technology raises, both to address regulators' concerns and to protect their reputations at a time when public attention on AI is at an all-time high. But the "low-hanging" regulatory fruit of accurate product representation should not be ignored.

The CID also requests information on a variety of additional issues addressing privacy, consumer risk, and bias, including:

  • Disclosures and Representations: How OpenAI retains or uses personal information;
  • Model Development/Training: The data used to develop Large Language Models ("LLMs"), the extent to which OpenAI has assessed or reviewed the content of the training data, the process of retraining an LLM to produce a new version of the model, the policies and procedures used to assess risk and safety before releasing a new LLM product, and how personal information is kept out of training data;
  • Assessing and Addressing Risks: The policies and procedures relating to the LLM's actual or potential generation of statements about individuals, the steps taken to assess the LLM's capacity to generate false, misleading, or disparaging statements about real individuals and any strategies to mitigate such statements, the extent to which the LLM can generate statements about real individuals that contain personal information, and the extent to which OpenAI has received complaints regarding specific instances in which the LLM has caused any of the harms discussed as "safety challenges" in the GPT-4 System Card;
  • Privacy and Prompt Injection Risks and Mitigation: The data incident self-reported by OpenAI involving a bug that allowed users to see titles from other users' chat histories and payment information, all instances of actual or attempted prompt injection attacks, and policies and procedures for monitoring, preventing, and mitigating prompt injection attacks;
  • API Integrations and Plugins: Policies and procedures for assessing or mitigating actual or potential risks of access to or exposure of personal information, technical or organizational measures third parties must take to assess or mitigate risks in connection with the use of the OpenAI API or plugins, and policies or procedures relating to API users' ability to modify models or products, including measures taken to ensure that such modifications do not increase the risk of unauthorized access to or exposure of personal information;
  • Monitoring, Collection, Use, and Retention of Personal Information: The types of personal information collected or processed, the mechanisms available for individuals to opt out of the collection and processing of their personal information, the mechanisms available for users to request deletion of their personal information, and any processes used by OpenAI to aggregate, anonymize, pseudonymize, or otherwise de-identify personal information.


This alert provides general coverage of its subject area. We provide it with the understanding that Frankfurt Kurnit Klein & Selz is not engaged herein in rendering legal advice, and shall not be liable for any damages resulting from any error, inaccuracy, or omission. Our attorneys practice law only in jurisdictions in which they are properly authorized to do so. We do not seek to represent clients in other jurisdictions.