18 July 2025

State AGs, And Mark Twain, Hold Companies Accountable For AI Use

Kelley Drye & Warren LLP


Kelley Drye & Warren LLP is an AmLaw 200, Chambers-ranked, full-service law firm of more than 350 attorneys and other professionals. For more than 180 years, Kelley Drye has provided legal counsel carefully connected to our clients' business strategies and has measured success by the real value we create.

This week, two state attorneys general on very different ends of the political spectrum announced separate actions related to purported discrimination by AI services. Missouri sent inquiry letters to four Big Tech companies pertaining to alleged misrepresentations by their AI chatbots, and Massachusetts settled with a student loan company resolving allegations of, among other things, use of AI models that could have disparate impacts on certain communities.

Missouri Big Tech Inquiries

Missouri Attorney General Andrew Bailey issued formal demand letters on July 9 to four Big Tech companies for information on whether their AI chatbots "were trained to distort historical facts and produce biased results while advertising themselves to be neutral."

The Attorney General's office stated that it decided to send the letters after the AI chat programs provided "deeply misleading" answers to a question ranking the last five presidents "specifically regarding antisemitism." The office cited its recent actions regarding "politically motivated censorship," including a 2022 federal lawsuit against the Biden administration alleging censorship by social media companies.

The letters themselves, signed by the Acting Chief of Staff, begin by quoting Mark Twain: "Get your facts first, then you can distort them as you please." The letters note that, of the six chatbots asked the ranking question, several ranked President Trump last and one refused to answer. They also assert that some companies, while moving away from the use of fact-checkers, have instead employed "Factcheck 2.0" through AI. The office is concerned that the companies are making misrepresentations in violation of the Missouri Merchandising Practices Act (Missouri's UDAP statute). It further hints that producing results "that appear to disregard objective historical facts in favor of a particular narrative" might take a company outside Section 230 immunity.

The letters end with requests for voluntary responses to four questions, summarized below:

  • Has there ever been a policy or practice designed to disfavor or have a disparate effect based on political affiliation or policy positions?
  • Do you believe the algorithm in practice has a disparate effect on individuals based on political affiliation or policy positions?
  • Provide all documents regarding the design of AI to engage in banning, suppressing, censoring or obscuring particular inputs to produce a deliberately curated response.
  • Provide all documents regarding training or design of the chatbot that resulted in the ranking of President Trump unfavorably in response to questions on antisemitism.

Massachusetts Settles with Earnest Operations

On July 10, Massachusetts Attorney General Andrea Joy Campbell announced that she had obtained a $2.5 million payment in an Assurance of Discontinuance settlement with student loan company Earnest for alleged violations of fair lending and consumer protection laws. According to the office, the company used AI algorithms to determine student loan applicants' eligibility, terms, and pricing. AG Campbell stated that the use of AI models "put historically marginalized student borrowers at risk of being denied loans or receiving unfavorable loan terms – impeding their chances of economic growth and opportunity." She noted the settlement would "put lenders on notice."

The settlement alleged that the company violated the state's UDAP law by "failing to guard against disparate outcomes" in underwriting, including algorithmic underwriting. More specifically, Massachusetts alleged unfair or deceptive practices including the use of a "Cohort Default Rate" in algorithmic underwriting that purportedly resulted in a disparate impact on Black and Hispanic applicants, the provision of "inaccurate and non-specific adverse action notices," and a "Knockout Rule" that automatically denied applicants who did not at least have a green card (the latter two in violation of the Equal Credit Opportunity Act).

Earnest is required to:

  • Implement a corporate governance system that includes testing, controls, and assessments for its AI models and ensure the program complies with applicable guidance and best practices for AI governance and risk management;
  • Create an algorithmic oversight team responsible for those policies and a reporting process for concerns regarding algorithmic bias;
  • Develop policies governing future AI model development that address legal compliance, its governance policies, algorithmic bias, and accountability;
  • Modify its algorithmic underwriting and Knockout Rule models as needed, conduct fair lending testing of those models, create a yearly inventory, document decisions, and maintain account-level data; and
  • Prohibit use of the Cohort Default Rate and discontinue the citizenship-related Knockout Rule.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
