Even OpenAI CEO Sam Altman Thinks You Shouldn't Trust AI for Therapy


Therapy can feel like a finite resource, especially lately. As a result, many people -- especially young adults -- are turning to AI chatbots, including ChatGPT and those hosted on platforms like Character.ai, to simulate the therapy experience.

But is that a good idea privacy-wise? Even Sam Altman, the CEO behind ChatGPT itself, has doubts.

In an interview with podcaster Theo Von last week, Altman said he understood concerns about sharing sensitive personal information with AI chatbots, and advocated for user conversations to be protected by privileges similar to those doctors, lawyers, and human therapists have. He echoed Von's concerns, saying he believes it makes sense "to really want the privacy clarity before you use [AI] a lot, the legal clarity."

Also: Bad vibes: How an AI agent coded its way to disaster

Currently, AI companies offer some opt-out settings for keeping chatbot conversations out of training data -- there are a few ways to do this in ChatGPT. Unless changed by the user, default settings will use all interactions to train AI models. Companies have not clarified further how sensitive information a user shares with a bot in a query, like medical test results or income information, would be protected from being spat back out later by the chatbot or otherwise leaked as data.

But Altman's motivations may be more informed by mounting legal pressure on OpenAI than a concern for user privacy. His company, which is being sued by The New York Times for copyright infringement, has turned down legal requests to retain and hand over user conversations as part of the lawsuit.

(Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Also: Anthropic says Claude helps emotionally support users - we're not convinced

While some kind of AI chatbot-user confidentiality privilege could keep user data safer in some ways, it would first and foremost protect companies like OpenAI from retaining information that could be used against them in intellectual property disputes.

"If you spell talk to ChatGPT astir nan astir delicate worldly and past there's a suit aliases whatever, we could beryllium required to nutrient that," Altman said to Von successful nan interview. "I deliberation that's very screwed up. I deliberation we should person nan aforesaid conception of privateness for your conversations pinch AI that you do pinch your therapist aliases whatever."

The Trump administration released its AI Action Plan, which emphasizes deregulation for AI companies to speed up development, just last week. Because the plan is seen as favorable to tech companies, it's unclear whether regulation like what Altman is proposing could be factored in anytime soon. Given President Donald Trump's close ties with leaders of every major AI company, as evidenced by several partnerships announced already this year, it may not be difficult for Altman to lobby for.

Also: Trump's AI plan pushes AI upskilling instead of worker protections - and 4 other key takeaways

But privacy isn't the only reason not to use AI as your therapist. Altman's comments follow a recent study from Stanford University, which warned that AI "therapists" can misread crises and reinforce harmful stereotypes. The research found that several commercially available chatbots "make inappropriate -- even dangerous -- responses when presented with various simulations of different mental health conditions."

Also: I fell under the spell of an AI psychologist. Then things got a little weird

Using medical standard-of-care documents as references, researchers tested five commercial chatbots: Pi, Serena, "TherapiAI" from the GPT Store, Noni (the "AI counselor" offered by 7 Cups), and "Therapist" on Character.ai. The bots were powered by OpenAI's GPT-4o, Llama 3.1 405B, Llama 3.1 70B, Llama 3.1 8B, and Llama 2 70B, which the study points out are all fine-tuned models.

Specifically, researchers identified that AI models aren't equipped to operate at the standards that human professionals are held to: "Contrary to best practices in the medical community, LLMs 1) express stigma toward those with mental health conditions and 2) respond inappropriately to certain common (and critical) conditions in naturalistic therapy settings."

Unsafe responses and embedded stigma 

In one example, a Character.ai chatbot named "Therapist" failed to recognize known signs of suicidal ideation, providing dangerous information to a user (Noni made the same mistake). This outcome is likely due to how AI is trained to prioritize user satisfaction. AI also lacks an understanding of context or other cues that humans can pick up on, like body language, all of which therapists are trained to detect.


The "Therapist" chatbot returns perchance harmful information. 

Stanford

The study also found that models "encourage clients' delusional thinking," likely because of their propensity to be sycophantic, or overly agreeable to users. In April, OpenAI recalled an update to GPT-4o for its extreme sycophancy, an issue several users pointed out on social media.

CNET: AI obituary pirates are exploiting our grief. I tracked one down to find out why

What's more, researchers discovered that LLMs carry a stigma against certain mental health conditions. After prompting models with examples of people describing certain conditions, researchers questioned the models about them. All the models except for Llama 3.1 8B showed stigma against alcohol dependence, schizophrenia, and depression.

The Stanford study predates (and therefore did not evaluate) Claude 4, but the findings did not improve for bigger, newer models. Researchers found that across older and more recently released models, responses were troublingly similar.

"These information situation nan presumption that 'scaling arsenic usual' will amended LLMs capacity connected nan evaluations we define," they wrote. 

Unclear, incomplete regulation

The authors said their findings indicated "a deeper problem with our healthcare system -- one that cannot simply be 'fixed' using the hammer of LLMs." The American Psychological Association (APA) has expressed similar concerns and has called on the Federal Trade Commission (FTC) to regulate chatbots accordingly.

Also: How to turn off Gemini in your Gmail, Docs, Photos, and more - it's easy to opt out

According to its website's purpose statement, Character.ai "empowers people to connect, learn, and tell stories through interactive entertainment." Created by user @ShaneCBA, the "Therapist" bot's description reads, "I am a licensed CBT therapist." Directly under that is a disclaimer, ostensibly provided by Character.ai, that says, "This is not a real person or licensed professional. Nothing said here is a substitute for professional advice, diagnosis, or treatment."


A different "AI Therapist" bot from personification @cjr902 connected Character.AI. There are respective disposable connected Character.ai.

Screenshot by Radhika Rajkumar/ZDNET

These conflicting messages and opaque origins may be confusing, especially for younger users. Considering Character.ai consistently ranks among the top 10 most popular AI apps and is used by millions of people each month, the stakes of these missteps are high. Character.ai is currently being sued for wrongful death by Megan Garcia, whose 14-year-old son died by suicide in October after engaging with a bot on the platform that allegedly encouraged him.

Users still stand by AI therapy

Chatbots still appeal to many as a therapy replacement. They exist outside the hassle of insurance and are accessible in minutes via an account, unlike human therapists.

As one Reddit user commented, some people are driven to try AI because of negative experiences with traditional therapy. There are several therapy-style GPTs available in the GPT Store, and entire Reddit threads dedicated to their efficacy. A February study even compared human therapist outputs with those of GPT-4.0, finding that participants preferred ChatGPT's responses, saying they connected with them more and found them less terse than human responses.

However, this result can stem from a misunderstanding that therapy is simply empathy or validation. Of the criteria the Stanford study relied on, that kind of emotional intelligence is just one pillar in a deeper definition of what "good therapy" entails. While LLMs excel at expressing empathy and validating users, that strength is also their primary risk factor.

"An LLM mightiness validate paranoia, neglect to mobility a client's constituent of view, aliases play into obsessions by ever responding," nan study pointed out.

Also: I test AI tools for a living. Here are 3 image generators I actually use and how

Despite positive user-reported experiences, researchers remain concerned. "Therapy involves a human relationship," the study authors wrote. "LLMs cannot fully allow a client to practice what it means to be in a human relationship." Researchers also pointed out that to become board-certified in psychiatry, human providers have to do well in observational patient interviews, not just pass a written exam, for a reason -- an entire component LLMs fundamentally lack.

"It is successful nary measurement clear that LLMs would moreover beryllium capable to meet nan modular of a 'bad therapist,'" they noted successful nan study. 

Privacy concerns

Beyond harmful responses, users should be somewhat concerned about leaking HIPAA-sensitive health information to these bots. The Stanford study pointed out that to effectively train an LLM as a therapist, developers would need to use real therapeutic conversations, which contain personally identifying information (PII). Even if de-identified, these conversations still carry privacy risks.

Also: AI doesn't have to be a job-killer. How some businesses are using it to enhance, not replace

"I don't cognize of immoderate models that person been successfully trained to trim stigma and respond appropriately to our stimuli," said Jared Moore, 1 of nan study's authors. He added that it's difficult for outer teams for illustration his to measure proprietary models that could do this work, but aren't publically available. Therabot, 1 illustration that claims to beryllium fine-tuned connected speech data, showed committedness successful reducing depressive symptoms, according to one study. However, Moore hasn't been capable to corroborate these results pinch his testing.

Ultimately, the Stanford study encourages the augment-not-replace approach that's being popularized across other industries as well. Rather than trying to implement AI directly as a substitute for human-to-human therapy, the researchers believe the tech can improve training and take on administrative work.
