Anthropic's $1.5 billion copyright settlement

By AI & CloudSummit Team | 06 September 2025 | Legal & Compliance

On September 5, 2025, Anthropic agreed to pay at least $1.5 billion to settle a class action copyright lawsuit brought by authors, marking the largest copyright settlement in U.S. history. The settlement, pending court approval, resolves allegations that Anthropic illegally downloaded millions of pirated books to train its Claude AI chatbot. This landmark agreement comes amid a broader wave of copyright litigation against AI companies, with Anthropic becoming the first major AI firm to reach such a substantial settlement.

The settlement stems from the case Bartz v. Anthropic PBC, filed in August 2024 by authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson in the U.S. District Court for the Northern District of California. The authors alleged that Anthropic downloaded over 7 million pirated books from shadow libraries Library Genesis (LibGen) and Pirate Library Mirror (PiLiMi) to train Claude, constituting massive copyright infringement. In June 2025, Judge William Alsup issued a split ruling that proved pivotal: while training AI on copyrighted works constitutes “exceedingly transformative” fair use, downloading and storing pirated materials does not qualify for fair use protection. This distinction created enormous legal exposure for Anthropic, with potential statutory damages reaching hundreds of billions of dollars.

The settlement details reveal unprecedented compensation

Under the settlement terms, Anthropic will pay approximately $3,000 per book for an estimated 500,000 works, with the first $300 million payment due within five business days of court approval. The company must also destroy all pirated datasets it used for training. The per-work figure is roughly four times the standard statutory minimum of $750 per work (which can fall to $200 for innocent infringement) and represents about a 98% discount from the $150,000 maximum available for willful infringement. The settlement timing appears strategic: it was announced just three days after Anthropic raised $13 billion at a $183 billion valuation on September 2, 2025, demonstrating the company's ability to pay while avoiding a potentially catastrophic trial scheduled for December 2025.
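
To make those proportions concrete, here is a quick back-of-the-envelope check of the figures in Python. The per-work payment, the estimated class size, and the statutory bounds are the numbers reported above; the calculation is purely illustrative, not a legal damages model.

```python
# Back-of-the-envelope check of the settlement arithmetic reported above.
# Figures come from the article and 17 U.S.C. § 504(c); this is purely
# illustrative, not a legal damages calculation.

per_work_payment = 3_000        # reported settlement payment per book
estimated_works = 500_000       # estimated number of covered works
statutory_minimum = 750         # standard statutory minimum per work
willful_maximum = 150_000       # statutory maximum per willfully infringed work

total_settlement = per_work_payment * estimated_works
print(f"Total settlement:            ${total_settlement:,}")       # $1,500,000,000

multiple_of_minimum = per_work_payment / statutory_minimum
print(f"Multiple of statutory floor: {multiple_of_minimum:.0f}x")  # 4x

discount_from_maximum = 1 - per_work_payment / willful_maximum
print(f"Discount from willful max:   {discount_from_maximum:.0%}") # 98%

# Hypothetical exposure if each of the 500,000 covered works had drawn the
# willful maximum at trial (the far larger figures cited elsewhere assume
# the full 7 million downloaded books, not just the covered class):
worst_case_exposure = willful_maximum * estimated_works
print(f"Worst-case exposure:         ${worst_case_exposure:,}")    # $75,000,000,000
```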

Deputy General Counsel Aparna Sridhar stated: “Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims. We remain committed to developing safe AI systems that help people and organizations extend their capabilities.” The court has scheduled a preliminary approval hearing for September 8, 2025, with Judge Alsup presiding.

Beyond the authors’ settlement, Anthropic confronts two other significant lawsuits. The Concord Music Group v. Anthropic case, filed in October 2023, involves major music publishers including Universal Music Publishing Group alleging “systematic and widespread infringement” of copyrighted song lyrics. The publishers seek $75 million in damages plus injunctive relief, citing examples of Claude reproducing lyrics to songs like Katy Perry’s “Roar” and Gloria Gaynor’s “I Will Survive.” Evidence from the authors’ case revealed Anthropic also downloaded sheet music and songbooks via BitTorrent, strengthening the music publishers’ claims.

Reddit filed a separate lawsuit in June 2025 alleging breach of contract, unjust enrichment, and unfair competition for unauthorized scraping of Reddit user posts to train AI models. Reddit seeks to remand the case to state court, while Anthropic claims federal copyright preemption applies. These ongoing cases suggest Anthropic’s legal exposure extends beyond the authors’ settlement.

Anthropic’s settlement occurs within a broader context of 45+ pending copyright lawsuits against AI companies, with no other major settlements reached. OpenAI faces multiple high-profile cases including the Authors Guild lawsuit featuring George R.R. Martin and John Grisham, and the New York Times lawsuit seeking billions in damages. Meta recently won partial summary judgment in the Sarah Silverman case, with courts dismissing derivative work theories while allowing core copyright claims to proceed. The music industry has filed suits against AI music generators Suno and Udio seeking up to $150,000 per song.

Judge Alsup’s ruling in the Anthropic case established a critical legal precedent: AI training on legally acquired copyrighted materials may constitute fair use, but using pirated sources crosses the line into infringement. This distinction pushes AI companies toward licensing agreements rather than unauthorized data acquisition. Congressional pressure has intensified, with a July 2025 Senate hearing where Senator Josh Hawley called AI training “the largest intellectual property theft in American history.”

Settlement signals a shift in AI industry practices

The $1.5 billion settlement represents approximately 0.8% of Anthropic’s valuation but sends a powerful message about the costs of using pirated training data. The agreement suggests a potential industry standard of roughly $3,000 per work for settlements, though this remains far below maximum statutory damages. For context, AI models have been trained on datasets equivalent to 22 Libraries of Congress, raising questions about retroactive liability across the industry.
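
As a quick sanity check on that ratio (the settlement and valuation figures are those cited above; the arithmetic is illustrative only):

```python
# Settlement size relative to Anthropic's reported September 2025 valuation.
settlement = 1.5e9   # $1.5 billion settlement
valuation = 183e9    # $183 billion post-money valuation

print(f"Settlement as share of valuation: {settlement / valuation:.1%}")  # ~0.8%
```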

The settlement’s impact extends beyond financial penalties. Anthropic must destroy its pirated datasets and presumably rebuild training pipelines using legally acquired materials. This requirement, combined with the settlement’s size, may accelerate the shift toward formal licensing arrangements between AI companies and content creators. OpenAI has already signed deals with Associated Press and Axel Springer, while Meta and Google pursue similar partnerships.

Conclusion

The $1.5 billion Anthropic settlement confirms that using pirated materials for AI training carries enormous legal and financial risks, even as courts recognize AI training itself as potentially transformative fair use. This landmark agreement, the first and only major AI copyright settlement to date, establishes both a cautionary precedent and a potential framework for resolving the dozens of similar cases pending against other AI companies. While the settlement allows Anthropic to avoid a potentially ruinous trial, it fundamentally validates creators' claims that AI companies must pay for, or lawfully acquire, the copyrighted works that power their systems. The distinction between legally obtained and pirated content has proven to be the critical factor, suggesting that the AI industry's path forward lies not in avoiding copyright entirely but in establishing sustainable licensing models that balance innovation with creator compensation.
