Sam Altman Warns There’s No Legal Confidentiality When Using ChatGPT as a Therapist


ChatGPT users may want to think twice before turning to their AI app for therapy or other kinds of emotional support. According to OpenAI CEO Sam Altman, the AI industry hasn’t yet figured out how to protect user privacy when it comes to these more sensitive conversations, because there’s no doctor-patient confidentiality when your doctor is an AI.

The exec made these comments on a recent episode of Theo Von’s podcast, This Past Weekend w/ Theo Von.

In response to a question about how AI works with today’s legal system, Altman said one of the problems of not yet having a legal or policy framework for AI is that there’s no legal confidentiality for users’ conversations.

“People talk about the most personal sh** in their lives to ChatGPT,” Altman said. “People use it — young people, especially, use it — as a therapist, a life coach; having these relationship problems and [asking] ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. And we haven’t figured that out yet for when you talk to ChatGPT.”

This could create a privacy concern for users in the case of a lawsuit, Altman added, because OpenAI would be legally required to produce those conversations today.

“I think that’s very screwed up. I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever — and no one had to think about that even a year ago,” Altman said.

The company understands that the lack of privacy could be a blocker to broader user adoption. In addition to AI’s demand for so much online data during the training period, it’s being asked to produce data from users’ chats in some legal contexts. Already, OpenAI has been fighting a court order in its lawsuit with The New York Times, which would require it to save the chats of hundreds of millions of ChatGPT users globally, excluding those from ChatGPT Enterprise customers.


In a statement on its website, OpenAI said it’s appealing this order, which it called “an overreach.” If the court could override OpenAI’s own decisions around data privacy, it could open the company up to further demands for legal discovery or law enforcement purposes. Today’s tech companies are regularly subpoenaed for user data in order to aid in criminal prosecutions. But in more recent years, there have been additional concerns about digital data as laws began limiting access to previously established freedoms, like a woman’s right to choose.

When the Supreme Court overturned Roe v. Wade, for example, customers began switching to more private period-tracking apps or to Apple Health, which encrypted their records.

Altman asked the podcast host about his own ChatGPT use, as well, given that Von said he didn’t talk to the AI chatbot much due to his own privacy concerns.

“I think it makes sense … to really want the privacy clarity before you use [ChatGPT] a lot — like the legal clarity,” Altman said.

Sarah has worked as a reporter for TechCrunch since August 2011. She joined the company after having previously spent over three years at ReadWriteWeb. Prior to her work as a reporter, Sarah worked in I.T. across a number of industries, including banking, retail, and software.
