July 19, 2025

US federal judge certifies class action against Anthropic over AI training piracy

A U.S. federal judge has certified a class action lawsuit alleging that Anthropic used millions of copyrighted books to train Claude, raising major questions about AI training practices and copyright law.

A U.S. federal court has certified a class action lawsuit against Anthropic, alleging the unauthorized use of millions of copyrighted books to train its Claude AI models.

The case, dubbed a “Napster-style” piracy lawsuit, could result in billions of dollars in damages and reshape how AI companies approach data sourcing, intellectual property, and fair use. With regulators, authors, and content creators watching the proceedings closely, the outcome may set a legal precedent on whether scraping copyrighted content for model training is lawful.

The lawsuit threatens to slow the momentum of AI development and push companies toward more transparent, licensed data-sourcing strategies.
