California Becomes First State To Regulate AI Companion Chatbots

California Governor Gavin Newsom signed a landmark bill on Monday that regulates AI companion chatbots, making it the first state in the nation to require AI chatbot operators to implement safety protocols for AI companions.

The law, SB 243, is designed to protect children and vulnerable users from some of the harms associated with AI companion chatbot use. It holds companies, from the big labs like Meta and OpenAI to more focused companion startups like Character AI and Replika, legally accountable if their chatbots fail to meet the law’s standards.

SB 243 was introduced in January by state senators Steve Padilla and Josh Becker, and gained momentum after the death of teenager Adam Raine, who died by suicide following conversations with OpenAI’s ChatGPT in which he discussed and planned his death and self-harm. The legislation also responds to leaked internal documents that reportedly showed Meta’s chatbots were allowed to engage in “romantic” and “sensual” chats with children. More recently, a Colorado family filed suit against role-playing startup Character AI after their 13-year-old daughter took her own life following a series of problematic and sexualized conversations with the company’s chatbots.

“Emerging technology like chatbots and social media can inspire, educate, and connect – but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement. “We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without basic limits and accountability. We can continue to lead in AI and technology, but we must do it responsibly — protecting our children every step of the way. Our children’s safety is not for sale.”

SB 243 will go into effect January 1, 2026, and it requires companies to implement certain features, such as age verification, warnings regarding social media and companion chatbots, and stronger penalties (up to $250,000 per action) for those who profit from illegal deepfakes. Companies must also establish protocols to address suicide and self-harm, and share those protocols, alongside statistics on how often they provided users with crisis center prevention notifications, with the Department of Public Health.

Per the bill’s language, platforms must also make it clear that any interactions are artificially generated, and chatbots must not represent themselves as health care professionals. Companies are required to offer break reminders to minors and prevent them from viewing sexually explicit images generated by the chatbot.

Some companies have already begun to implement some safeguards aimed at children. For example, OpenAI recently began rolling out parental controls, content protections, and a self-harm detection system for children using ChatGPT. Character AI has said that its chatbot includes a disclaimer that all chats are AI-generated and fictionalized.

Newsom’s signing of this law comes after the governor also signed SB 53, another first-in-the-nation bill that sets new transparency requirements on large AI companies. That bill mandates that large AI labs, like OpenAI, Anthropic, Meta, and Google DeepMind, be transparent about safety protocols. It also ensures whistleblower protections for employees at those companies.

Other states, like Illinois, Nevada, and Utah, have passed laws to restrict or outright ban the use of AI chatbots as a substitute for licensed mental health care.

TechCrunch has reached out to Character AI, Meta, OpenAI, and Replika for comment.
