ARTICLE
21 May 2025

Recent Lawsuits Against AI Companies: Beyond Copyright Infringement (Video)

Traverse Legal

Contributor

In 2004, Traverse Legal was a start-up. We created a brand new business model for the law that is now used by some of the biggest law firms in the country. We invented and incorporated technology into our process and client relations, which are still innovative and unique. We have represented clients of all types in connection with technology, internet law, intellectual property, and business matters. We can help you.

As a niche law firm with controlled overhead and specialized practice areas, we can provide more cost-effective, knowledgeable, and strategic representation than the large law firms we go up against every day. Our clients are based in over 25 different countries around the globe. There is a reason why some of the largest and most successful companies in the world select Traverse Legal to handle matters within our areas of experience.

The pace of AI innovation is staggering. But with that growth comes a rising tide of litigation, often extending far beyond the usual copyright disputes. Today, AI companies face legal challenges in areas...
United States Texas Technology

The pace of AI innovation is staggering. But with that growth comes a rising tide of litigation, often extending far beyond the usual copyright disputes. Today, AI companies face legal challenges in areas that strike at the core of how their systems collect data, make decisions, and impact people's lives.

From biometric privacy violations to algorithmic discrimination, recent lawsuits are reshaping what "AI litigation" really means. Legal counsel, founders, investors, and compliance teams can no longer afford to think of copyright as the only battlefield. A closer look at recent non-copyright AI lawsuits reveals the legal trends reshaping how companies build and deploy intelligent systems.

Privacy and Biometric Data Collection Lawsuits

AI companies using facial recognition, voiceprints, or other biometric data are increasingly ending up in court, and the stakes are only getting higher.

Clearview AI's $50 Million Settlement

In March 2025, a controversial facial recognition startup, Clearview AI, agreed to a $50 million settlement in a biometric privacy class action. The plaintiffs claimed the company scraped billions of facial images from the internet; LinkedIn, Facebook, and YouTube, and sold the data to law enforcement agencies without user consent.

The lawsuit was based on the Illinois Biometric Information Privacy Act (BIPA), one of the strictest biometric privacy laws in the U.S. Unlike many tech laws, BIPA includes a private right of action, meaning individuals can sue directly if their biometric data is collected unlawfully.

What made the Clearview settlement unique wasn't just the amount, but the structure. Rather than paying cash upfront, Clearview gave plaintiffs a share of its future value. That structure raised eyebrows, but it also signaled a deeper shift: courts and claimants are starting to treat biometric data as an asset with long-term value.

Meta's $1.4 Billion Payout in Texas

Meta (formerly Facebook) faced a similar biometric storm. In July 2024, the company settled a lawsuit with the state of Texas for a staggering $1.4 billion, indicating the largest privacy-related payout ever obtained by a single state.

The case stemmed from Facebook's "Tag Suggestions" feature, which automatically analyzed user photos and identified people without their explicit consent. The lawsuit argued this violated Texas's Capture or Use of Biometric Identifier Act, which has been in place since 2009.

What made this case different? It wasn't a class action. Texas brought the suit directly, positioning itself as an aggressive enforcer of biometric privacy rights. The size of the settlement sent a clear message to other AI and tech companies: failing to get informed consent for biometric data can cost billions.

Google Agrees to $1.4 Billion Privacy Settlement with Texas

In May 2025, Google reached a massive $1.4 billion settlement with the State of Texas, resolving two lawsuits over alleged data privacy violations. While the dollar figure mirrors Meta's earlier payout, the underlying claims were distinct.

The lawsuits accused Google of mishandling user data in ways that violated Texas privacy laws. Specifically, it involved how the company collected, stored, and used biometric and location data without proper consent. Texas Attorney General Ken Paxton argued that Google's data practices failed to give users meaningful control or transparency, especially regarding sensitive identifiers.

This settlement signals a continued trend: states are stepping in aggressively where federal regulation lags. For AI and tech companies, that means privacy compliance is no longer just a federal concern; it's also a state-by-state battleground, with billion-dollar consequences.

Amazon Faces Biometric Consent Lawsuit in Illinois

A recent class-action lawsuit filed in Illinois claims Amazon violated the state's Biometric Information Privacy Act (BIPA) by collecting and analyzing users' facial data through Amazon Photos without proper consent.

At the center of the case is Rekognition, Amazon's facial recognition software. Plaintiffs allege that Rekognition scans faces from personal photos uploaded to Amazon Photos, uses that data to refine its algorithm, and then licenses the improved tech to third parties, including law enforcement.

If the court sides with plaintiffs, it could have ripple effects across AI platforms that rely on user-generated content to train or improve their models. Once again, BIPA is proving to be a powerful legal tool, especially in cases involving biometric identifiers.

Voice Data Under Fire: Delgado v. Meta

Facial recognition isn't the only biometric category drawing scrutiny. In Delgado v. Meta Platforms, Inc., plaintiffs argue that Meta collected and stored users' voiceprints via Facebook and Messenger, again without the consent required by Illinois law.

The case, still pending as of mid-2025, recently reached a critical stage when a federal judge allowed Meta to pursue summary judgment. That means Meta is arguing there's not enough evidence to take the case to trial. Regardless of the outcome, the case signals a growing legal interest in how companies handle voice data, an often-overlooked form of biometric information.

For AI companies developing tools that rely on natural language input or voice interfaces, the message is clear: biometric privacy laws extend beyond faces.

LinkedIn Accused of Misusing Private Messages for AI Training

In a separate case filed in California, LinkedIn is facing a class-action lawsuit for allegedly harvesting private messages to train AI models without user consent.

The plaintiffs claim that LinkedIn shared this data with third parties and attempted to cover its tracks by quietly updating its privacy policy in 2024. If the allegations are proven true, this lawsuit could expand the conversation around AI legal risks from biometric identifiers to personal communications more broadly.

AI companies using real-world data for model training should take note: even internal data sources, like messages or voice memos, could trigger major legal exposure if collected improperly.

AI Bias and Discrimination: A Growing Front in AI Litigation

As AI systems become embedded in decision-making especially in hiring, insurance, and public safety, and their flaws are becoming legal liabilities. Recent non-copyright AI lawsuits show that when algorithms treat people differently based on race, disability, or gender, the consequences go far beyond PR fallout.

Intuit and HireVue Accused of Biased Hiring Tech

In March 2025, civil rights groups including the ACLU and Public Justice filed a complaint against Intuit and its AI hiring vendor, HireVue. The case centers on D.K., an Indigenous and Deaf woman who applied for a promotion and was screened using HireVue's automated speech recognition and assessment system.

According to the complaint, the system penalized D.K. due to her speech patterns and lack of typical vocal cues; biases the AI was never trained to handle. The legal argument points to multiple violations: the Americans with Disabilities Act, Title VII of the Civil Rights Act, and Colorado's Anti-Discrimination Act.

This case exemplifies a core AI legal risk: when algorithms replicate or amplify real-world bias, companies may be liable under long-standing anti-discrimination laws, even if the bias wasn't intentional.

State Farm Faces Racial Discrimination Claim Over AI

In Illinois, two Black homeowners filed a lawsuit against State Farm, alleging the company's AI-driven insurance claim process discriminated against them. They claim their storm damage claims were delayed, scrutinized more heavily, and required extra documentation, unlike their White neighbors with similar claims.

Filed in 2022 and still ongoing, the lawsuit cleared a motion to dismiss and could expand into a class-action involving thousands of claimants across the Midwest.

If your model disproportionately affects a protected group, even indirectly, this can lead to AI discrimination lawsuits, and potentially massive liability.

AI Misuse and Malfunction: From Health Denials to Self-Driving Deaths

Some AI systems don't just discriminate; they fail outright. And when they do, people can get hurt or die. These cases are pushing courts to rethink how responsibility works when AI tools make critical decisions.

Health Insurers Sued Over AI-Driven Denials

In early 2025, major health insurers Cigna, Humana, and UnitedHealth Group were hit with lawsuits for allegedly using AI to wrongfully deny medical claims.

One filing cites Cigna's internal process, where an algorithm reviewed and rejected over 300,000 claims in just two months. The average time spent per claim? 1.2 seconds.

The lawsuits argue such rapid-fire denials defy basic due diligence, and in some cases, patients were discharged too early and later died. A related case against UnitedHealth, involving its "nH Predict" algorithm, is now moving forward in federal court.

These cases raise urgent questions about AI healthcare claim denials, especially when algorithms replace human judgment in life-or-death situations.

Tesla's Ongoing Autopilot Litigation

Tesla remains under fire for how it markets and deploys its Autopilot feature. In December 2024, the family of Genesis Mendoza-Martinez filed a wrongful death lawsuit after he died in a 2023 crash involving a Model S in Autopilot mode. The car struck a stationary fire truck.

The lawsuit accuses Tesla of fraudulent misrepresentation, alleging the company exaggerated Autopilot's capabilities and failed to adequately warn users of its limits.

Though Tesla won a ruling to cap damages in a separate case earlier in 2025, these lawsuits reveal how AI system malfunctions, especially in safety-critical areas like driving can evolve into high-stakes product liability claims.

New Frontiers in AI Litigation: Defamation and Prompt Injection

While many lawsuits target how AI systems are trained or how they treat individuals, a new wave of legal claims is testing uncharted ground. These novel cases go beyond traditional categories like privacy or discrimination. They directly challenge how generative AI tools function and how proprietary models can be protected.

OpenAI Sued for AI-Generated Defamation

In a landmark 2023 case, a Georgia radio host filed a defamation lawsuit against OpenAI after ChatGPT generated a false legal complaint accusing him of embezzlement. The fabricated claim apparently invented by the language model during a conversation, named the plaintiff as the subject of a nonexistent lawsuit involving the Second Amendment Foundation.

This was one of the first AI defamation cases to hit the courts. It underscored a critical risk of generative AI: hallucinations, where the system produces convincing but entirely false outputs. For users, these "hallucinations" can damage reputations or spread misinformation. For AI companies, they open the door to lawsuits traditionally reserved for human publishers.

While OpenAI argued that ChatGPT is a tool, not a speaker, the case brings up complex questions: Who is liable when AI spreads false information? Where do Section 230 protections end? And how can companies prevent reputational harm from outputs they didn't manually create?

The answers may define how U.S. defamation law evolves in the age of AI.

AI Trade Secrets Theft Through "Prompt Injection"

Another first-of-its-kind lawsuit involves prompt injection; a technique used to manipulate AI systems into revealing hidden instructions or data. In 2024, OpenEvidence Inc., a medical AI startup, sued competitor Pathway Medical Inc., alleging that the company used fake credentials to access its platform and then deployed prompt injection attacks to extract proprietary system prompts.

At the heart of the case is whether prompt engineering tactics, normally used for testing or research can constitute trade secrets theft under the Defend Trade Secrets Act.

This lawsuit raises a bigger issue: Can the hidden architecture behind an AI tool, such as system prompts, decision trees, or fine-tuned datasets qualify for legal protection? And if so, how can companies safeguard that information in an environment where adversarial prompts can coax it out?

Courts haven't answered these questions yet, but AI companies would be wise to start preparing for them.

AI Lawsuits Are Evolving, And So Must Your Legal Strategy

As artificial intelligence continues to permeate nearly every industry, the legal system is rapidly adapting to meet new challenges. Privacy violations involving biometric data have already triggered billion-dollar settlements under state laws like Illinois' BIPA and Texas's CUBI. At the same time, lawsuits over algorithmic bias and discrimination are testing how civil rights laws apply when machines make the decisions. In high-stakes areas like healthcare and autonomous vehicles, AI misuse is raising hard questions about accountability, safety, and due process.

What's emerging is a broader, deeper category of AI litigation, one that goes beyond how models are trained to how they're used, misused, and weaponized. Claims like defamation from hallucinated outputs or trade secret theft via prompt injection aren't just legal curiosities, but early signs of where regulation and enforcement are heading. The companies that understand this shift now will be far better positioned to adapt, protect their products, and lead with trust.

If your team is building or deploying AI tools, this isn't the time to wait and see. It's time to assess your risk profile, review your data practices, and get serious about compliance before litigation knocks on your door.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.

Mondaq uses cookies on this website. By using our website you agree to our use of cookies as set out in our Privacy Policy.

Learn More