Models
September 1, 2025

Anthropic shifts privacy stance, lets users share data for AI training

Anthropic now asks users for permission to train its AI models on their chat data, defaulting to data-sharing unless they opt out, with retention extended to five years. The change applies only to individual plans, not enterprise plans.

Anthropic has revised its data policy: users on Claude Free, Pro, and Max plans must now choose whether their chat data is used for AI training. If they do not opt out by September 28, 2025, their data will be used for training and retained for five years, up from the previous 30-day window.

The change does not affect enterprise or API users. This marks a shift from Anthropic’s earlier privacy-first model.

Deleted conversations remain excluded from training, and users can change their preference at any time, although data already used for training cannot be retracted.

