Anthropic is making some big changes to how it handles user data, requiring all Claude users to decide by September 28 whether they want their conversations used to train AI models. While the company directed us to its blog post on the policy changes when asked what prompted the move, we've formed some theories of our own.
But first, what's changing: previously, Anthropic didn't use consumer chat data for model training. Now the company wants to train its AI systems on user conversations and coding sessions, and it said it's extending data retention to five years for those who don't opt out.
That is a massive update. Previously, users of Anthropic's consumer products were told that their prompts and conversation outputs would be automatically deleted from Anthropic's back end within 30 days "unless legally or policy-required to keep them longer" or their input was flagged as violating its policies, in which case a user's inputs and outputs might be retained for up to two years.
By consumer, we mean the new policies apply to Claude Free, Pro, and Max users, including those using Claude Code. Business customers using Claude Gov, Claude for Work, Claude for Education, or API access will be unaffected, which is how OpenAI similarly shields enterprise customers from data training policies.
So why is this happening? In that post about the update, Anthropic frames the changes around user choice, saying that by not opting out, users will "help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations." Users will "also help future Claude models improve at skills like coding, analysis, and reasoning, ultimately leading to better models for all users."
In short, help us help you. But the full truth is probably a little less selfless.
Like every other large language model company, Anthropic needs data more than it needs people to have fuzzy feelings about its brand. Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic's competitive positioning against rivals like OpenAI and Google.
Beyond the competitive pressures of AI development, the changes also appear to reflect broader industry shifts in data policies, as companies like Anthropic and OpenAI face increasing scrutiny over their data retention practices. OpenAI, for instance, is currently fighting a court order that forces the company to retain all consumer ChatGPT conversations indefinitely, including deleted chats, because of a lawsuit filed by The New York Times and other publishers.
In June, OpenAI COO Brad Lightcap called this "a sweeping and unnecessary demand" that "fundamentally conflicts with the privacy commitments we have made to our users." The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with Zero Data Retention agreements are still protected.
What's alarming is how much confusion all of these changing usage policies are creating for users, many of whom remain oblivious to them.
In fairness, everything is moving quickly right now, so as the technology changes, privacy policies are bound to change too. But many of these changes are fairly sweeping and mentioned only fleetingly amid the companies' other news. (You wouldn't think Tuesday's policy changes for Anthropic users were very big news based on where the company placed the update on its press page.)

But many users don't realize that the terms they agreed to have changed, because the design practically guarantees it. Most ChatGPT users keep clicking "delete" toggles that aren't technically deleting anything. Meanwhile, Anthropic's implementation of its new policy follows a familiar pattern.
How so? New users will choose their preference during signup, but existing users face a pop-up with "Updates to Consumer Terms and Policies" in large text and a prominent black "Accept" button, with a much tinier toggle switch for training permissions below it in smaller print, automatically set to "On."
As The Verge observed earlier today, that design raises concerns that users might quickly click "Accept" without noticing they're agreeing to data sharing.
Meanwhile, the stakes for user awareness couldn't be higher. Privacy experts have long warned that the complexity surrounding AI makes meaningful user consent nearly unattainable. Under the Biden administration, the Federal Trade Commission even stepped in, warning that AI companies risk enforcement action if they engage in "surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print."
Whether the commission, now operating with just three of its five commissioners, still has its eye on these practices today is an open question, one we've put directly to the FTC.