ARTICLE
2 October 2025

Anthropic's $1.5B Settlement: A Landmark In The Evolving Copyright Terrain

Patterson

Contributor

Patterson Intellectual Property Law, P.C. is a full-service intellectual property law firm handling patent, trademark, copyright, trade secret, and domain-related matters for its clients. Patterson Intellectual Property Law, P.C. was formed by a group of Registered Patent Attorneys to be the first law firm in Middle Tennessee to practice exclusively in Intellectual Property Law. Since its founding in March 1992, and in response to the needs of its clients, the Firm has more than tripled in size. In addition, Patterson Intellectual Property Law, P.C. has continued to invest heavily in law office technology to maintain its ability to provide sophisticated, high-quality services in an efficient and cost-effective manner.


On September 5, 2025, AI giant Anthropic agreed to the largest payout in U.S. copyright law history—a minimum of $1.5 billion—for using pirated books to train its AI model. A federal judge has since granted preliminary approval of the agreement. Beyond the monumental amount, the agreement also carries several legally significant lessons.

Anthropic's Historic Settlement

Anthropic proposed a settlement agreement that would avoid statutory damages while still paying roughly $3,000 per covered work, and, after an initial rejection, the agreement was granted preliminary approval. The landmark settlement stems from the lawsuit Bartz v. Anthropic PBC, No. C 24-05417 WHA (N.D. Cal. filed Aug. 19, 2024), in which authors of fiction and nonfiction works accused Anthropic of infringing their copyrighted materials. The lawsuit is just one of many copyright infringement suits brought by authors of different types of works against major players in AI development. Authors are challenging both the training of AI models on copyrighted materials, like the books at issue in Anthropic, and the production of outputs that arguably infringe their creative works. In some ways, authors are seeking to enforce the scope of their intellectual property protections; in other ways, they are turning to courts to balance the value of human creative output in an economic landscape increasingly shaped by AI.

In Anthropic, Plaintiffs challenged the AI company's training of its large language models (LLMs) on Plaintiffs' works, taking particular issue with the company's sourcing of training material from "shadow libraries," i.e., pirated repositories of digital texts that included Plaintiffs' copyrighted works. Anthropic's principal defense was fair use—a carve-out in copyright protection that permits limited use of copyrighted material without the owner's permission for purposes such as criticism, comment, news reporting, teaching, scholarship, or research.

On June 23, 2025, Judge Alsup of the Northern District of California ruled that the fair use doctrine allowed Anthropic to train its LLMs on copyrighted materials, but not on the pirated copies of those materials, which he found "inherently, irredeemably infringing" of the authors' copyrights.1

The June ruling kicked off months of negotiation culminating in a settlement positioned as the largest publicly reported copyright recovery in history; however, the settlement was initially rejected by the judge over concerns about the claims process for class members. The key terms of the settlement provide:

1. Anthropic will pay a minimum of $1.5 billion, which works out to approximately $3,000 per class work.

2. Anthropic will destroy its copies of the works acquired from the shadow libraries.

3. Anthropic is released only from claims based on its past training conduct, not from suits involving outputs from its LLMs.

At the preliminary hearing on the settlement, Judge Alsup denied the proposal without prejudice, saying he felt "misled" and wanted more assurances about the claims process for class members.

After the authors submitted supplemental information regarding the claims process, the judge granted preliminary approval of the settlement. Under the plan, funds for works with more than one claimant are presumptively split 50/50 between publisher and author, though the split is not mandatory: claimants may elect to divide the funds differently if they have agreed to other terms in a contract with a publisher.

Takeaways from the Settlement Deal

Training LLMs on pirated works comes with a heavy cost at a time when AI's place within copyright law is not yet clear. While the $3,000-per-work figure appears to be a nice payday for the authors, statutory damages in this case could have exceeded $15 billion, or upwards of $75 billion in the case of willful infringement. See 17 U.S.C. § 504(c) (up to $30,000 per work infringed and up to $150,000 per work for willful infringement).
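For a rough sense of scale (assuming an implied class of about 500,000 works, inferred by dividing the $1.5 billion floor by roughly $3,000 per work; the actual class count may differ), the per-work statutory maximums compound as follows:

\[
\frac{\$1{,}500{,}000{,}000}{\$3{,}000 \text{ per work}} \approx 500{,}000 \text{ works}
\]
\[
500{,}000 \times \$30{,}000 = \$15 \text{ billion} \qquad 500{,}000 \times \$150{,}000 = \$75 \text{ billion}
\]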

For AI companies, this development underscores the care needed in selecting and curating the datasets used to train LLMs. Using data from untrustworthy or outright illegal sources could have steep consequences. Such risks could lead companies to more seriously consider purchasing entire libraries of works outright to build training data from copyrighted works without obtaining explicit permission from the copyright owners. For smaller startups, this development may also signal higher entry costs into the market.

For authors, this case demonstrates the value of registering works to protect against unauthorized use in AI training; timely registration is generally a prerequisite for statutory damages. As protections for creators emerge, authors may also be afforded more compensation and control in the post-Anthropic settlement environment, particularly if companies choose to license training data rather than rely on potentially risky datasets.

Still, the settlement is just one piece of the broader puzzle of how courts will handle AI's use of copyrighted material. The Anthropic suit is one of dozens of copyright lawsuits pending against AI giants, and the deal sets a potential benchmark for companies looking to resolve similar claims. Stay tuned for further developments.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
