July 19, 2025

US federal judge certifies class action against Anthropic over AI training piracy

A U.S. federal judge has certified a class action lawsuit against Anthropic, alleging that it used millions of copyrighted books to train its Claude AI models, raising major questions about AI training practices and copyright law.


The case, dubbed a "Napster-style" piracy lawsuit, could lead to billions of dollars in damages and potentially reshape how AI companies approach data sourcing, intellectual property, and fair use. As regulators, authors, and content creators closely watch the proceedings, the outcome may establish legal precedent on whether scraping copyrighted content for model training is lawful.

The lawsuit threatens to slow AI development momentum and push companies toward more transparent and licensed data usage strategies.

