July 19, 2025

US Federal judge certifies class action against Anthropic over AI training piracy

A U.S. federal judge has certified a class action lawsuit alleging that Anthropic used millions of copyrighted books to train Claude, raising major questions about AI training practices and copyright law.

A U.S. federal court has certified a class action lawsuit against Anthropic, alleging the unauthorized use of millions of copyrighted books to train its Claude AI models.

The case, dubbed a “Napster-style” piracy lawsuit, could result in billions of dollars in damages and may reshape how AI companies approach data sourcing, intellectual property, and fair use. As regulators, authors, and content creators closely watch the proceedings, the outcome could set legal precedent on whether using copyrighted content for model training is lawful.

The lawsuit threatens to slow AI development momentum and push companies toward more transparent and licensed data usage strategies.
