Texas AG Announces 'First-of-Its-Kind' Settlement
On September 18, 2024, Texas Attorney General (AG) Ken Paxton reached an agreement with an artificial intelligence (AI) healthcare technology company to resolve allegations that the company made deceptive statements about the accuracy of its generative AI (GenAI) product, which is used to summarize patient healthcare data. The settlement is likely to be followed by further inquiries into the marketing, use, and deployment of AI tools throughout the country.
In this case, major hospitals provided healthcare data to the company so that hospital staff could obtain GenAI-generated summaries of patient conditions and treatments. The company touted the accuracy of its GenAI product in its marketing, claiming a "severe hallucination rate" of "<0.001%" and "<1 per 100,000." Hallucinations are GenAI outputs that may appear believable but are in fact inaccurate or fabricated. An investigation by the Texas AG's office concluded that these accuracy claims were deceptive and that the company "may have deceived hospitals about the accuracy and safety" of its product.
In connection with the settlement, the company agreed that any marketing of "metrics, benchmarks, or similar measurements describing the outputs of its generative AI products" must "clearly and conspicuously disclose" the "meaning or definition" of the metric, benchmark, or similar measurement, as well as the "method, procedure, or any other process" used in its calculation.
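To see why that disclosure obligation matters, consider how sensitive a "hallucination rate" is to the way it is defined and calculated. The following sketch is purely illustrative and is not drawn from the case: it assumes a hypothetical audit in which human reviewers count hallucinated statements in a handful of AI-generated summaries, and it computes three plausible "rates" from the same underlying data.

```python
# Illustrative only: the same audit data yields very different
# "hallucination rates" depending on the chosen definition and
# denominator. All names and figures here are hypothetical and
# are not taken from the Texas settlement.

from dataclasses import dataclass

@dataclass
class SummaryAudit:
    statements: int    # factual statements in one AI-generated summary
    hallucinated: int  # statements reviewers flagged as hallucinated
    severe: int        # flagged statements deemed clinically "severe"

audits = [
    SummaryAudit(statements=40, hallucinated=2, severe=0),
    SummaryAudit(statements=55, hallucinated=0, severe=0),
    SummaryAudit(statements=35, hallucinated=3, severe=1),
]

total_statements = sum(a.statements for a in audits)
total_hallucinated = sum(a.hallucinated for a in audits)
total_severe = sum(a.severe for a in audits)
summaries_with_any = sum(1 for a in audits if a.hallucinated > 0)

# Per-statement rate: hallucinated statements / all statements.
print(f"per-statement rate: {total_hallucinated / total_statements:.2%}")

# Per-summary rate: summaries containing any hallucination / all summaries.
print(f"per-summary rate:   {summaries_with_any / len(audits):.2%}")

# "Severe" rate: only clinically severe hallucinations are counted.
print(f"severe-only rate:   {total_severe / total_statements:.4%}")
```

On this toy data, the reported figure ranges from under 1% to roughly 67% depending solely on the definition chosen, which illustrates why the settlement requires the "meaning or definition" of a metric and the "method, procedure, or any other process" used to calculate it to be disclosed alongside the number itself.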
FTC Warns About Discriminatory Impacts of AI
This agreement out of Texas follows warnings from the Federal Trade Commission (FTC) that market participants may violate the FTC Act by using AI that has discriminatory impacts, by making misleading or unsubstantiated claims about AI, or by deploying AI without adequate risk mitigation. The FTC has also issued a report evaluating the use and impact of AI in combating online harms, which addresses concerns about AI inaccuracy, bias, and discriminatory design. And in several enforcement actions, the FTC has required companies to destroy algorithms and other products trained on data that should not have been collected, a remedy sometimes called algorithmic disgorgement.
What Businesses Should Be Doing Now
Businesses should evaluate their internal AI governance and compliance programs with the assistance of counsel, taking into account emerging laws, regulations, and guidance, as well as potential reputational impacts. In particular, businesses should be careful about the claims they make regarding the accuracy, safety, or impact of any AI tools they market, procure, deploy, or use.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.