Anthropic’s new update puts you in control
Anthropic is changing how it trains its AI models, and the change involves user data.
From now on, chats and coding sessions may be used to improve Claude, unless you choose to opt out.
The company is also extending its data retention period to five years for those who stay opted in.
All Claude users need to make a decision by 28 September 2025.
If you click “Accept” now, Anthropic will immediately start using your future chats and coding sessions to train its models and keep that data for up to five years.
Past chats won’t be included unless you choose to resume them.
The update applies to Claude Free, Pro, and Max plans, but not to Anthropic’s enterprise products like Claude Gov, Claude for Work, Claude for Education, or API use through platforms such as Amazon Bedrock and Google Cloud Vertex AI.
New users will be prompted during sign-up, while existing users will see a pop-up asking them to decide.
The toggle allowing Anthropic to use your data is on by default, so anyone who clicks “Accept” without checking the setting will be opted in automatically.
If you’d rather not share your data, you can turn the toggle off when you see the pop-up.
What to know:
Make your choice by 28 September 2025.
The data-sharing toggle is on by default.
Opting out only applies to future data, not past chats.
Who’s in and who’s out
If you’ve already clicked “Accept” without realising, you can change your choice anytime via Settings → Privacy → Privacy Settings → Help improve Claude.
Just note that any new preference only affects future data; anything already used for training can’t be removed.
Anthropic says it uses tools and automated processes to filter sensitive information and has confirmed it doesn’t sell user data to third parties.
One wrong click and suddenly you’re part of AI history.