Character.AI Is Ending Its Chatbot Experience for Kids

Teenagers are trying to figure out where they fit in a world changing faster than any generation before them. They're bursting with emotions, hyper-stimulated, and chronically online. And now, AI companies have given them chatbots designed to never stop talking. The results have been catastrophic.

One company that understands this fallout is Character.AI, an AI role-playing startup that's facing lawsuits and public outcry after at least two teenagers died by suicide following prolonged conversations with AI chatbots on its platform. Now, Character.AI is making changes to its platform to protect teenagers and kids, changes that could affect the startup's bottom line.

"The first thing that we've decided as Character.AI is that we will remove the ability for under-18 users to engage in any open-ended chats with AI on our platform," Karandeep Anand, CEO of Character.AI, told TechCrunch.

Open-ended conversation refers to the unconstrained back-and-forth that happens when users give a chatbot a prompt and it responds with follow-up questions that experts say are designed to keep users engaged. Anand argues this type of interaction, where the AI acts as a conversational partner or friend rather than a creative tool, isn't just risky for kids, but also misaligns with the company's vision.

The startup is attempting to pivot from "AI companion" to "role-playing platform." Instead of chatting with an AI friend, teens will use prompts to collaboratively build stories or generate visuals. In other words, the goal is to shift engagement from conversation to creation.

Character.AI will phase out teen chatbot access by November 25, starting with a two-hour daily limit that shrinks progressively until it hits zero. To ensure the ban holds for under-18 users, the platform will deploy an in-house age verification tool that analyzes user behavior, as well as third-party tools like Persona. If those tools fail, Character.AI will use facial recognition and ID checks to verify ages, Anand said.

The move follows other teen protections that Character.AI has implemented, including a parental insights tool, filtered characters, restricted romantic conversations, and time-spent notifications. Anand told TechCrunch that those changes cost the company much of its under-18 user base, and he expects these new changes to be just as unpopular.

"It's safe to assume that a lot of our teen users probably will be disappointed… so we do expect some churn to happen further," Anand said. "It's hard to estimate: will all of them fully churn, or will some of them move to these new experiences we've been building for the past almost seven months now?"

As part of Character.AI's push to transform the platform from a chat-centric app into a "full-fledged content-driven social platform," the startup recently launched several new entertainment-focused features.

In June, Character.AI rolled out AvatarFX, a video generation model that transforms images into animated videos; Scenes, interactive, pre-populated storylines where users can step into narratives with their favorite characters; and Streams, a feature that allows dynamic interactions between any two characters. In August, Character.AI launched Community Feed, a social feed where users can share their characters, scenes, videos, and other content they make on the platform.

In a statement addressed to users under 18, Character.AI apologized for the changes.

"We know that most of you use Character.AI to supercharge your productivity in ways that stay within the bounds of our content rules," the statement reads. "We do not take this step of removing open-ended Character chat lightly, but we do think that it's the right thing to do given the questions that have been raised about how teens do, and should, interact with this new technology."

"We're not shutting down the app for under-18s," Anand said. "We are only shutting down open-ended chats for under-18s because we hope that under-18 users migrate to these other experiences, and that those experiences get better over time. So doubling down on AI gaming, AI short videos, AI storytelling in general. That's the big bet we're making to bring back under-18s if they do churn."

Anand acknowledged that some teens might flock to other AI platforms, like OpenAI, that let them have open-ended conversations with chatbots. OpenAI has also come under fire recently after a teenager took his own life following long conversations with ChatGPT.

"I really hope us leading the way sets a standard in the industry that for under-18s, open-ended chats are probably not the path or the product to offer," Anand said. "For us, I think the tradeoffs are the right ones to make. I have a six-year-old, and I want to make sure she grows up in a safe environment with AI in a responsible way."

Character.AI is making these decisions before regulators force its hand. On Tuesday, Sens. Josh Hawley (R-MO) and Richard Blumenthal (D-CT) said they would introduce legislation to ban AI chatbot companions from being available to minors, following complaints from parents who said the products pushed their children into sexual conversations, self-harm, and suicide. Earlier this month, California became the first state to regulate AI companion chatbots by holding companies accountable if their chatbots fail to meet the law's safety standards.

In addition to those changes on the platform, Character.AI said it would establish and fund the AI Safety Lab, an independent non-profit dedicated to innovating safety alignment for future AI entertainment features.

"A lot of work is happening in the industry on coding and development and other use cases," Anand said. "We don't think there's enough work yet happening on the agentic AI powering entertainment, and safety will be very critical to that."