Models
September 1, 2025

Anthropic shifts privacy stance, lets users share data for AI training

Anthropic now asks users for permission to train its AI models on chat data, defaulting to data sharing unless they opt out and extending retention to five years; the change applies only to individual (not enterprise) plans.

Anthropic has revised its data policy: users on Claude Free, Pro, and Max plans must now choose whether their chat data may be used for AI training. Those who do not opt out by September 28, 2025 will have their data included, with retention extended from the previous 30-day window to five years.

The change does not affect enterprise or API users. This marks a shift from Anthropic’s earlier privacy-first model.

Deleted conversations remain excluded from training, and users can change their preference at any time, although data already used for training cannot be retracted.

#Anthropic
