Models
September 1, 2025

Anthropic shifts privacy stance, lets users share data for AI training

Anthropic now asks users for permission to train its AI models on their chat data, defaulting to data-sharing unless they opt out, with retention extended to five years. The change applies only to individual plans, not enterprise plans.

Anthropic has revised its data policy: users on Claude Free, Pro, and Max plans must now choose whether their chat data is used for AI training. If they do not opt out by the deadline of September 28, 2025, their data will be used and retained for five years, compared to the previous 30-day window.

The change does not affect enterprise or API users. This marks a shift from Anthropic’s earlier privacy-first model.

Deleted conversations remain excluded from training, and users can change their preference at any time, although data already used for training cannot be retracted.
