Models
September 1, 2025

Anthropic shifts privacy stance, lets users share data for AI training

Anthropic now asks users for permission to train its AI models on their chat data, defaulting to data-sharing unless they opt out, with retention extended to five years. The change applies only to individual (not enterprise) plans.

Anthropic has revised its data policy: starting now, users on Claude Free, Pro, and Max plans must choose whether their chat data can be used for AI training. If they do not opt out by the September 28, 2025 deadline, their data will be used and retained for five years, up from the previous 30-day window.

The change does not affect enterprise or API users. This marks a shift from Anthropic’s earlier privacy-first model.

Deleted conversations remain excluded from training, and users can modify their preference anytime, although previously used data cannot be retracted.

#Anthropic
