Duane Morris Takeaway: In 2025, artificial intelligence (AI) continued to influence class action litigation on multiple fronts. First, we saw a growth of class action lawsuits targeting AI, including in the copyright and employment spaces, as well as in the securities fraud area with claims of "AI washing." Second, we saw an increasing number of courts and lawyers err in their use of AI to generate documents filed on dockets across the country, and we encountered numerous examples of the ways in which AI is continuing to impact the efficiencies that underlie the litigation process.
DMCAR Editor Jerry Maatman discusses this trend in detail in the video below:

- AI Provided Raw Material For Class Action Lawsuits
AI has been an accelerating force in class action litigation as a source of claims stemming from the development, use, and promotion of AI technologies. In 2025, new and ongoing claims included those alleging copyright infringement, algorithmic bias or discrimination, and securities fraud.
On the copyright front, courts issued key decisions, including divergent rulings on whether using copyrighted works to train generative AI models constitutes "fair use" under the Copyright Act. In copyright cases, the plaintiffs typically allege that a developer of a generative AI tool violated copyright laws by using publicly available copyrighted works to train and inform the output of the AI tools. In Tremblay v. OpenAI, Inc., No. 23-CV-3223 (N.D. Cal. June 13, 2024), for instance, the plaintiffs alleged that OpenAI trained its algorithm by "copying massive amounts of text" to enable it to "emit convincingly naturalistic text outputs in response to user prompts." The plaintiffs alleged these outputs included summaries so accurate that the algorithm must have collected and retained knowledge of the ingested copyrighted works in order to output similar textual content. The plaintiffs typically invoke the Copyright Act to allege that the defendant willfully made unauthorized copies of thousands of copyrighted works, generating statutory damages of up to $150,000 per copyrighted work for willful infringement, and, on that basis, to seek billions of dollars in damages.
The $1.5 billion settlement reached in Bartz, et al. v. Anthropic is a landmark settlement and a prime example. In that suit, three authors filed a class action lawsuit against Anthropic claiming that Anthropic had downloaded millions of copyrighted books from "shadow libraries" like Library Genesis and Pirate Library Mirror to train its AI systems. In June 2025, Judge William H. Alsup of the Northern District of California granted in part and denied in part Anthropic's motion for summary judgment on the issue of fair use in a split-the-baby decision. The record showed that Anthropic downloaded more than seven million books from pirate sites but also bought and scanned millions more. The court held that Anthropic's use of legally acquired books for AI training was protected fair use but that downloading and keeping pirated copies was not, noting that a developer that has obtained copies of books "from a pirate site has infringed already, full stop." In August 2025, Judge Alsup granted the plaintiffs' motion for class certification, sua sponte defining the class to include "all beneficial or legal copyright owners of the exclusive right to reproduce copies of any book" in the datasets that met his criteria. With tens of billions of dollars on the line, the parties promptly reached a settlement for $1.5 billion, the largest settlement of any class action in 2025.
Notably, shortly after Judge Alsup's decision on summary judgment, Judge Vince Chhabria of the U.S. District Court for the Northern District of California reached a different conclusion in Kadrey, et al. v. Meta Platforms, Inc., No. 2023-CV-03417 (N.D. Cal. June 25, 2025). In that case, 13 authors, mostly famous fiction writers, sued Meta for downloading their books from online "shadow libraries" and using the books to train Meta's generative AI models (specifically, its large language models, called Llama). The parties filed cross-motions for partial summary judgment regarding fair use. The court rejected the plaintiffs' argument that "the fact that the AI developer downloaded the books from shadow libraries and did not start with an 'authorized copy' of each book gives them an automatic win." The court held that, because Meta's use of the works was highly transformative, to overcome a fair use defense, the plaintiffs needed to show that the AI model harmed the market for the plaintiffs' works. Because the plaintiffs presented no meaningful evidence of market dilution, the court entered summary judgment for Meta on the fair use defense.
On the employment front, Mobley, et al. v. Workday, Inc., No. 23-CV-770 (N.D. Cal. May 16, 2025), continues to reign as one of the most watched and influential cases. In Mobley, the plaintiff, an African American male over the age of 40, who alleged that he suffers from anxiety and depression, brought suit against Workday claiming that its applicant screening tools discriminated against applicants on the basis of race, age, and disability. The plaintiff claimed that he applied for 80 to 100 jobs, and despite holding a bachelor's degree in finance, among other qualifications, did not get a single job offer. The district court granted the defendant's motion to dismiss on the ground that plaintiff failed to plead sufficient facts regarding the supposed liability of Workday as a software vendor for the hiring decisions of potential employers. In other words, the plaintiff failed to allege that Workday was "procuring" employees for its customers and merely claimed that he applied for jobs with a number of companies that all happened to use Workday.
On February 20, 2024, the plaintiff filed an amended complaint alleging that Workday was an agent of the employers that delegated authority to Workday to make hiring process decisions or, alternatively, that Workday was an employment agency or an indirect employer. The plaintiff claimed, among other things, that, in one instance, he applied for a position at 12:55 a.m. and his application was rejected less than an hour later. Judge Rita F. Lin granted in part and denied in part Workday's motion to dismiss the amended complaint. The court reasoned, among other things, that the relevant statutes prohibit discrimination "not just by employers but also by agents of those employers," so an employer cannot "escape liability for discrimination by delegating [] traditional functions, like hiring, to a third party," and an employer's agent can be independently liable when the employer has delegated to the agent "functions [that] are traditionally exercised by the employer." The court noted that, if it reasoned otherwise and accepted Workday's arguments, then companies could "escape liability for hiring decisions by saying that function has been handed over to someone else (or here, artificial intelligence)."
The court opined that, given Workday's allegedly "crucial role in deciding which applicants can get their 'foot in the door' for an interview, Workday's tools are engaged in conduct that is at the heart of equal access to employment opportunities." The court also denied Workday's motion to dismiss the plaintiff's disparate impact discrimination claims, reasoning that "[t]he zero percent success rate at passing Workday's initial screening," combined with the plaintiff's allegations of bias in Workday's training data and tools, plausibly supported an inference that Workday's algorithmic tools disproportionately rejected applicants based on factors other than qualifications, such as a candidate's race, age, or disability. Thereafter, the court conditionally certified a collective action of all individuals aged 40 and over who applied for jobs using Workday's platform and were rejected. In doing so, it authorized the plaintiff to send notice of the lawsuit to applicants nationwide. This litigation has been closely watched for its novel case theory based on artificial intelligence use in making personnel decisions and, given its success to date, is likely to prompt tag-along and copycat litigation.
On the securities front, over the past three years, plaintiffs have filed dozens of lawsuits alleging that various defendants made false or misleading statements related to AI technology or to AI as a driver of market revenue or demand, including claims that companies overstated their AI capabilities, effectiveness, or revenue generation in a practice known as "AI washing." For instance, on April 17, 2025, the plaintiff Wayne County Employees' Retirement System filed suit against AppLovin Corporation, No. 25-CV-03438 (N.D. Cal.), alleging, among other things, that the company falsely attributed its financial success to its enhanced AXON 2.0 digital ad platform and the use of "cutting edge" AI technologies to match advertisements to mobile games. In the complaint, the plaintiff claims that the company's revenue instead stemmed from manipulative ad practices, such as forced, silent app installations, and that, upon release of short-seller reports disclosing the alleged practices, the company's share price declined more than 12%.
Because investors have shown a willingness to pay a premium for shares of companies that appear positioned to capitalize on the effective use of AI, such statements have had the tendency to boost share prices. When projections fail to materialize, however, and share prices decline, plaintiffs are poised to take advantage.
In another example, plaintiffs filed a securities class action against Apple in the Northern District of California alleging that Apple made false and misleading statements regarding Siri's generative AI features. The plaintiffs allege that Apple, at its annual Worldwide Developers Conference and on earnings calls, claimed that its AI solution, Apple Intelligence, would create a more advanced and capable Siri. The plaintiffs allege that Apple continued to maintain that these features would arrive in early 2025 until March 2025, when it admitted that "[i]t's going to take us longer than we thought." The plaintiffs allege that, in the wake of these announcements, Apple's share price dropped almost $47.
In sum, with AI continuing to flourish, the implications of its development, use, and advertisement are providing the raw material for creative plaintiffs' class action lawyers. We should expect to see an upward trend of key decisions and new cases in 2026 and beyond as this burgeoning area of the law continues to expand.
- AI Continued To Impact The Litigation Process
As legal professionals on both sides leverage AI to attempt to increase efficiency and gain a strategic advantage, examples of improper use abound. Rarely a day passes without a headline reporting attorney misconduct. To date, much of the AI misuse has centered on attorneys submitting, and even courts issuing, filings, briefs, and decisions with fake citations. So-called "AI hallucinations" can take the form of citations to cases that do not exist or, even worse, the attribution of incorrect "hallucinated" holdings or quotations to existing opinions.
Bar associations have compiled dozens, if not hundreds, of instances of attorneys misusing generative AI in complaints, legal memoranda, expert reports, and appellate briefs. Perhaps more disturbing, these examples are joined by at least two instances of courts withdrawing decisions due to the incorporation of AI-generated content.
Such conduct has led to severe sanctions, including fines and suspensions for violation of ethical duties, as well as (presumably) terminations. To date, claims of overbilling for such AI-generated work product have not been made public, and lawyers continue to reiterate and train that AI is a tool, not a substitute for the application of legal analysis and judgment.
At the same time, AI is becoming an asset in the hands of more cautious practitioners who are taking advantage of its efficiencies for projects involving data analytics, document reviews, and form generation. Its use has also become transformative in settlement administration, where it has exposed vulnerabilities in the claims process by, for example, generating thousands of fraudulent entries that dilute legitimate claims and thereby reduce legitimate recoveries.
Much as teachers in classrooms use AI to detect AI, claims administrators are responding with their own AI-based tools to detect irregularities.
As the technology continues to evolve, it is having a pronounced impact on the class action space, which is particularly susceptible to mass-generated claims, demand letters, and form complaints. As a result, we are likely seeing only the tip of the iceberg in terms of AI's influence on class action litigation.
Disclaimer: This Alert has been prepared and published for informational purposes only and is not offered, nor should be construed, as legal advice. For more information, please see the firm's full disclaimer.