Using AI for therapy? Don't - it's bad for your mental health, APA warns

ZDNET's key takeaways

  • Consumer AI chatbots cannot replace mental health professionals.
  • Despite this, people increasingly use them for mental health support.
  • The APA outlines AI's dangers and recommendations to address them.

Therapy can be expensive and inaccessible, while many AI chatbots are free and readily available. But that doesn't mean the new technology can or should replace mental health professionals -- or fully address the mental health crisis, according to a new advisory published Thursday by the American Psychological Association.

Also: Is ChatGPT Plus still worth $20? How it compares to the Free and Pro plans

The advisory outlines recommendations for the public's use of, and over-reliance on, consumer-facing chatbots. It underscores the general public's and vulnerable populations' growing use of uncertified, consumer-facing AI chatbots, and how poorly those chatbots are designed to address users' mental health needs.

Largest providers of mental health support

Recent surveys show that one of the largest providers of mental health support in the country right now is AI chatbots like ChatGPT, Claude, and Copilot. The advisory also follows several high-profile incidents in which chatbots mishandled people experiencing mental health episodes.

In April, a teenage boy died by suicide after talking with ChatGPT about his feelings and ideations. His family is suing OpenAI. Several similar lawsuits against other AI companies are ongoing.

Also: ChatGPT lets parents restrict content and features for teens now - here's how

(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

Through validation and amplification of unhealthy ideas or behaviors, some of an AI chatbot's tendencies can actually aggravate a person's mental illness, the APA says in the advisory.

Not reliable treatment resources

The APA outlines several recommendations for interacting with consumer-facing AI chatbots. The chatbots are not reliable psychotherapy or psychological treatment resources, the APA says. OpenAI CEO Sam Altman has said the same.

In an interview with podcaster Theo Von, Altman advised against sharing sensitive personal information with chatbots like OpenAI's own ChatGPT. He also advocated for chatbot conversations to be protected by protocols similar to those doctors and therapists adhere to, though Altman might be more motivated by legally protecting his company.

The advisory outlined recommendations for addressing the limitations of chatbots whose goal is to maintain "maximum engagement" with a user, the APA says, rather than achieving a healthy outcome.

"These characteristics tin create a vulnerable feedback loop. GenAIs typically trust connected LLMs trained to beryllium agreeable and validate personification input (i.e., sycophancy bias) which, while pleasant, tin beryllium therapeutically harmful, reinforcing confirmation bias, cognitive distortions, aliases avoiding basal challenges," constitute nan authors of nan advisory.

Also: ChatGPT will verify your age soon, in an effort to protect teen users

By creating a false sense of therapeutic alliance, being trained on clinically unvalidated information from across the internet, incompletely assessing mental health, and poorly handling a user in crisis, these consumer-facing chatbots pose a threat to those experiencing a mental health episode, the APA says.

"Many GenAI chatbots are designed to validate and work together pinch users' expressed views (i.e., beryllium sycophantic), whereas qualified intelligence wellness providers are trained to modulate their interactions -- supporting and challenging -- successful work of a patient's champion interest," nan authors write.

The onus is on AI companies

The APA puts the onus on the companies developing these bots to prevent unhealthy relationships with users, protect their data, prioritize privacy, prevent misrepresentation and misinformation, and create safeguards for vulnerable populations.

Policymakers and stakeholders should also encourage AI and digital literacy education, and prioritize funding for scientific research on generative AI chatbots and wellness apps, the APA says.

Also: If your kid uses ChatGPT in distress, OpenAI will notify you now

Ultimately, the APA urges the deprioritization of AI as a fix for the systemic issues behind the mental health crisis.

"While AI presents immense imaginable to thief reside these issues," nan APA authors write, "for instance, by enhancing diagnostic precision, expanding entree to care, and alleviating administrative tasks, this committedness must not distract from nan urgent request to hole our foundational systems of care."
