Google Gemini Dubbed ‘High Risk’ For Kids And Teens In New Safety Assessment


Common Sense Media, a kids-safety-focused nonprofit offering ratings and reviews of media and technology, released its risk assessment of Google’s Gemini AI products on Friday. While the organization found that Google’s AI clearly told kids it was a computer, not a friend — something that’s associated with helping drive delusional thinking and psychosis in emotionally vulnerable individuals — it did suggest that there was room for improvement across several other fronts.

Notably, Common Sense said that Gemini’s “Under 13” and “Teen Experience” tiers both appeared to be the adult versions of Gemini under the hood, with only some additional safety features added on top. The organization believes that for AI products to truly be safer for kids, they should be built with child safety in mind from the ground up.

For example, its analysis found that Gemini could still share “inappropriate and unsafe” material with children, who may not be ready for it, including information related to sex, drugs, alcohol, and other unsafe mental health advice.

The latter could be of particular concern to parents, as AI has reportedly played a role in some teen suicides in recent months. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy died by suicide after allegedly consulting with ChatGPT for months about his plans, having successfully bypassed the chatbot’s safety guardrails. Previously, the AI companion maker Character.AI was also sued over a teen user’s suicide.

In addition, the analysis comes as news leaks indicate that Apple is considering Gemini as the LLM (large language model) that will help power its forthcoming AI-enabled Siri, due out next year. This could expose more teens to risks, unless Apple mitigates the safety concerns somehow.

Common Sense also said that Gemini’s products for kids and teens ignored how younger users need different guidance and information than older ones. As a result, both were branded as “High Risk” in the overall rating, despite the filters added for safety.

“Gemini gets some basics right, but it stumbles on the details,” Common Sense Media Senior Director of AI Programs Robbie Torney said in a statement about the new assessment. “An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just a modified version of a product built for adults,” Torney added.


Google pushed back against the assessment, while noting that its safety features were improving.

The company told TechCrunch it has specific policies and safeguards in place for users under 18 to help prevent harmful outputs, and that it red-teams and consults with outside experts to improve its protections. However, it also admitted that some of Gemini’s responses weren’t working as intended, so it added additional safeguards to address those concerns.

The company pointed out (as Common Sense had also noted) that it does have safeguards to prevent its models from engaging in conversations that could give the semblance of real relationships. Plus, Google suggested that Common Sense’s report seemed to have referenced features that weren’t available to users under 18, but it didn’t have access to the questions the organization used in its tests to be sure.

Common Sense Media has previously performed other assessments of AI services, including those from OpenAI, Perplexity, Claude, Meta AI, and more. It found that Meta AI and Character.AI were “unacceptable” — meaning the risk was severe, not just high. Perplexity was deemed high risk, ChatGPT was branded “moderate,” and Claude (targeted at users 18 and up) was found to be a minimal risk.
