Can These ChatGPT Updates Make the Chatbot Safer for Mental Health?




ZDNET's key takeaways 

  • OpenAI is reducing undesirable behavior in its chatbot.
  • It saw a 65% reduction in unsatisfactory responses.
  • The updates aim to avoid encouraging users in crisis.

After calls to publicly show how the company is creating a safer experience for those experiencing mental health episodes, OpenAI announced improvements to its latest model, GPT-5, on Monday. 

Also: Even OpenAI CEO Sam Altman thinks you shouldn't trust AI for therapy

The company says these improvements create a model that can more reliably respond to people showing signs of mania, psychosis, self-harm and suicidal ideation, and emotional attachment. 

As a result, non-compliant ChatGPT responses -- those that push users further away from reality or worsen their mental state -- have decreased under OpenAI's new guidelines, the company said in a blog post. OpenAI estimated that the updates to GPT-5 "reduced the rate of responses that do not fully comply with desired behavior" by 65% in conversations with users about mental health issues. 

The updates 

OpenAI said it worked with more than 170 mental health experts to help the model recognize distress, respond carefully, and provide real-world guidance for users who are a danger to themselves. During a livestream about OpenAI's recent restructuring and future plans on Tuesday, an audience member asked CEO Sam Altman about that list of experts -- Altman wasn't sure how much of that information he could share, but noted that "more transparency there is a good thing." 

Also: Google's latest AI safety report explores AI beyond human control

(Disclosure: Ziff Davis, CNET's parent company, filed a lawsuit against OpenAI in April, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

OpenAI's advancements could prevent a user from spiraling further while they use ChatGPT -- which aligns with OpenAI's goals that its chatbot respects users' relationships, keeps them grounded in reality and away from ungrounded beliefs, responds safely to signs of delusion or mania, and notices indirect signals of self-harm or suicide risk, the company explained.  

OpenAI also laid out its process for how it's improving model responses. This includes mapping out potential harm; measuring and analyzing it to spot, predict, and understand risks; coordinating validation with experts; retroactively training models; and continuing to measure them for further risk mitigation. The company said it will then build upon its taxonomies, or user guides, that outline ideal or flawed behavior during sensitive conversations. 

Also: FTC scrutinizes OpenAI, Meta, and others on AI companion safety for kids

"These help us teach the model to respond more appropriately and track its performance before and after deployment," OpenAI wrote. 

AI and mental health 

OpenAI said in the blog post that the mental health conversations that trigger safety concerns on ChatGPT are rare. Still, several high-profile incidents have cast OpenAI and similar chatbot companies in a harsh light. This past April, a teenage boy died by suicide after talking with ChatGPT about his ideations; his family is now suing OpenAI. The company released new parental controls for its chatbot as a result. 

The incident illustrates AI's pitfalls in handling mental health-related user conversations. Character.ai is itself the target of a similar lawsuit, and an April study from Stanford laid out precisely why chatbots are risky replacements for therapists. 

This summer, Altman said he didn't advise using chatbots for therapy; however, during Tuesday's livestream, he encouraged users to engage with ChatGPT on personal conversation topics and for emotional support, saying, "This is what we're here for." 


The updates to GPT-5 follow a recent New York Times op-ed by a former OpenAI researcher who demanded OpenAI not only improve how its chatbot responds to mental health crises, but also show how it's doing so.

Also: How to use ChatGPT freely without giving up your privacy - with one simple trick

"A.I. is increasingly becoming a dominant part of our lives, and so are the technology's risks that threaten users' lives," Steven Adler wrote. "People deserve more than just a company's word that it has addressed safety issues. In other words: Prove it."
