ARTICLE
5 December 2024

Texas AG Enters Into First-of-Its-Kind Settlement With Provider of Generative AI Tools for Healthcare Providers Related to Alleged Misrepresentations Concerning Hallucination Rates

Goodwin Procter LLP

In a recent settlement, the Texas attorney general resolved allegations that Pieces Technologies, Inc. (Pieces), a healthcare generative AI company, misrepresented the hallucination rate of its generative AI product to healthcare providers and ultimately overstated the accuracy and safety of the product's underlying software. In the press release announcing the settlement, the Texas attorney general emphasized the state's significant interest in scrutinizing the use of AI in "high-risk" settings, such as healthcare, to protect public safety.

Pieces' software summarizes, charts, and drafts clinical notes for doctors and nurses. Pieces allegedly marketed the software as having a "critical hallucination rate" and a "severe hallucination rate" of less than 0.001%. The Texas attorney general alleged that these claims were false, misleading, and deceptive, in violation of the 1973 Texas Deceptive Trade Practices-Consumer Protection Act, which, among other things, prohibits disseminating a statement that a person knows materially misrepresents the character of a service in order to sell the service or induce a person to enter into a contract for it.

Pieces denied the attorney general's allegations. However, it agreed to implement the following measures:

  • Disclose the definition of any metric, benchmark, or measurement of its generative AI products used in its marketing content, along with the methodology used to calculate it, or retain an independent third-party auditor to assess its services and substantiate any marketing claims concerning them (a hypothetical sketch of such a metric disclosure follows this list)
  • Refrain from making false, misleading, or unsubstantiated claims about its generative AI products' features, accuracy, reliability, efficacy, testing methods, monitoring methodologies, metric definitions, or training data; from misleading customers or users about the accuracy, functionality, purpose, or features of its products; and from failing to disclose any financial or similar arrangements with individuals involved in marketing, advertising, endorsements, or promotions
  • Provide all current and future customers with documentation disclosing any known or reasonably knowable potentially harmful uses of its generative AI products or services, including:
    • Training data and/or models used
    • Intended purpose and user guidance
    • Known limitations or misuses
    • Any other documentation necessary to understand the output, monitor for inaccuracies, and prevent misuse
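
To illustrate what disclosing "the definition of any metric, benchmark, or measurement . . . and the methodology used to calculate it" might entail in practice, the following is a minimal, hypothetical sketch in Python. The settlement does not reveal how Pieces actually defined or computed its rates; the schema, severity rubric, and function names below are illustrative assumptions only.

    # Hypothetical illustration only: the settlement does not disclose how
    # "critical" or "severe" hallucination rates were actually defined or
    # computed. All names, labels, and data below are assumptions.
    from dataclasses import dataclass

    @dataclass
    class ReviewedNote:
        """One AI-generated clinical note after human review (assumed schema)."""
        note_id: str
        severity: str  # assumed rubric: "none", "minor", "severe", "critical"

    def hallucination_rate(notes: list[ReviewedNote], flagged: set[str]) -> float:
        """Share of reviewed notes whose severity label falls within `flagged`.

        Disclosing this metric in the manner the settlement contemplates would
        mean publishing the numerator, denominator, severity rubric, and
        sampling methodology, not just the headline percentage.
        """
        if not notes:
            raise ValueError("rate is undefined for an empty sample")
        hits = sum(1 for n in notes if n.severity in flagged)
        return hits / len(notes)

    sample = [
        ReviewedNote("n1", "none"),
        ReviewedNote("n2", "minor"),
        ReviewedNote("n3", "severe"),
    ]
    print(f"severe/critical rate: {hallucination_rate(sample, {'severe', 'critical'}):.4%}")

The arithmetic behind the headline claim also matters: a rate below 0.001% means fewer than one flagged note per 100,000 reviewed, so substantiating such a figure would require a human-reviewed sample, and a disclosed sampling methodology, on roughly that scale.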

While other states and governmental bodies are creating new laws to regulate generative AI products, such as the Colorado Artificial Intelligence Act, this agreement highlights that state regulators may opt to address AI-related risks under existing consumer protection laws without passing new legislation. At the federal level, the Federal Trade Commission has used similar consumer protection laws to investigate AI companies.

What implications does this settlement have for companies developing generative AI tools for healthcare applications, and for the healthcare organizations that will eventually adopt them?

To evaluate potential AI software vendors effectively, healthcare organizations considering generative AI tools must carefully scrutinize a product's marketing claims and intended uses. Similarly, companies developing generative AI tools need to ensure that the disclosures educating customers about the risks, limitations, and appropriate use of their AI offerings are accurate and that any claims about their technology are substantiated.

Furthermore, both generative AI companies and the healthcare entities using these technologies would be well advised to conduct ongoing risk assessments and implement risk management strategies tailored specifically to generative AI use. Such measures are crucial not only for mitigating risk and preventing liability but also for keeping pace with the evolving regulatory and enforcement landscape surrounding AI.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
