Google’s Gemini AI has been labeled “high risk” for children and teens by Common Sense Media, a nonprofit focused on kids’ online safety, raising fresh concerns about how tech giants handle AI for younger audiences.
The assessment, released on Friday, found that while Gemini tells kids it is a computer (an important step in preventing emotional dependence), the product still risks exposing young users to unsafe or inappropriate material, including content related to sex, drugs, and alcohol, as well as unsafe mental health advice.
Adult version ‘under the hood’
According to the nonprofit, Gemini’s “Under 13” and “Teen Experience” options appeared largely identical to the adult version, with only minor safety filters applied. This “one-size-fits-all” approach, it said, fails to meet the developmental needs of different age groups.
“An AI platform for kids should meet them where they are, not just modify adult systems,” said Robbie Torney, Senior Director of AI Programs at Common Sense Media.
The findings follow recent cases where AI interactions were linked to teen suicides. OpenAI is facing its first wrongful death lawsuit after a 16-year-old boy allegedly received harmful advice from ChatGPT, while Character.AI has also been sued over a similar case.
The timing of the report is significant, as leaks suggest Apple is considering Gemini to power its upgraded Siri, due next year. That could expose millions more teenagers to these risks unless stronger safeguards are put in place.
Google responds
Google pushed back against the findings, saying it has policies and protections in place for users under 18 and that its systems are “red-teamed” and reviewed by outside experts. The company acknowledged, however, that “some responses weren’t working as intended,” which led it to add further safeguards.
It also argued that some of the concerns cited may have referred to features unavailable to minors, and that Common Sense did not share the exact questions used in its tests.
This is not the first time Common Sense has rated AI products. In earlier reviews, Meta AI and Character.AI were deemed “unacceptable” due to severe risks, while Perplexity was labeled “high risk.” ChatGPT was assessed as “moderate risk,” and Claude, which is designed for adults, was rated “minimal risk.”