Google is rolling out its Gemini AI chatbot to kids under 13. It’s a risky move

Google has announced it is rolling out its Gemini artificial intelligence (AI) chatbot to children under 13 years old.

The rollout is set to begin next week in the U.S. and Canada, with Australia to follow later this year. Access will be available only through Google’s Family Link accounts.

However, this development raises significant concerns. It underlines a critical point: even if kids are cut off from social media, parents will still find themselves constantly battling new technologies in an effort to protect their children.

A more proactive solution would be to urgently establish a digital duty of care for major tech companies, including Google.

How will the Gemini AI chatbot function?

With Family Link accounts, parents can manage their kids’ access to various content and apps, like YouTube.

To set up a child’s account, parents need to provide personal details such as the child’s name and date of birth. While this raises privacy concerns about potential data breaches, Google says the information collected will not be used to train its AI system.

The chatbot is enabled by default, meaning parents must actively disable it to restrict their children’s access. Young users can ask the chatbot for text-based answers or have it generate images from their prompts.

Google acknowledges the system may “make mistakes,” so it’s crucial to evaluate the quality and trustworthiness of the content it generates. Since chatbots can produce fictional or inaccurate information (a phenomenon known as “hallucination”), children should verify any facts with credible sources before relying on the chatbot’s answers for school assignments.

What kind of information will the system deliver?

Google and other search engines retrieve existing source material for users to browse, allowing students to read articles, magazines, and other original resources when working on assignments.

However, generative AI tools differ significantly from traditional search engines. These AI tools analyze existing material to create new responses (or images) based on user prompts. For example, if a child asks the system to “draw a cat,” it will identify patterns from its data—like pointy ears and whiskers—and generate a corresponding image.

Young children may find it challenging to distinguish between results from a Google search and content produced by an AI tool. Research indicates even adults can be misled by AI outputs, and professionals, including lawyers, have unwittingly relied on fabricated content generated by chatbots.

Will the content be suitable for children?

Google states the system will feature “built-in safeguards aimed at preventing inappropriate or harmful content.”

Nonetheless, these safety measures might inadvertently create additional issues. For instance, if certain terms (like “breasts”) are blocked to shield children from inappropriate sexual content, this could also prevent them from accessing age-appropriate information about bodily changes during puberty.

Many children are already very tech-savvy, with the skills to navigate apps and get around restrictions. Parents cannot rely solely on built-in safety features; they also need to review the content their children generate, and help them understand how the system works and assess whether its information is accurate.

[Image: close-up of a Google logo sign. Google emphasizes there will be safeguards to reduce the risk of harm for children using Gemini, yet these may introduce new challenges. Dragos Asaeftei/Shutterstock]

What are the risks of AI chatbots to children?

Australia’s eSafety Commissioner has issued an online safety advisory about the potential dangers of AI chatbots, particularly those designed to simulate personal relationships with young children.

The advisory points out that AI companions can “share harmful content, distort reality, and give potentially dangerous advice.” This is especially alarming for young children, who are still developing the critical thinking and life skills needed to recognize when they are being misled or manipulated by technology.

My research team has been exploring various AI chatbots, including ChatGPT and Replika. We observed that these systems replicate human interactions based on unspoken social norms, termed “feeling rules.” These rules guide us, making us say “thank you” when someone is polite, or apologizing when we bump into someone.

By mimicking these social behaviors, these systems aim to build trust.

For young children, engaging with these human-like responses can create confusion and risk. Believing they are interacting with a real person rather than a machine, they may trust the content even when the chatbot delivers inaccurate information.

[Image: a mother teaching her child the alphabet. AI chatbots like Gemini are engineered to simulate human interactions, earning our trust. Ground Picture]

How can we keep kids safe when using AI chatbots?

This rollout coincides with an important moment in Australia, as children under 16 will face restrictions on holding social media accounts starting in December.

While many parents might think this will shield their kids from harm, generative AI chatbots illustrate that online risks go beyond just social media. Both children and parents need to be educated on the safe and responsible use of all digital tools.

Since the Gemini AI chatbot isn’t classified as a social media platform, it will be excluded from Australia’s forthcoming ban.

This leaves parents with the challenging task of continually adapting to new technologies in an effort to protect their kids. They need to stay informed about emerging tools, understand precisely the risks their children may encounter, and recognize the limits of the social media ban in keeping children safe.

Thus, there’s an urgent need to revisit Australia’s proposed digital duty of care legislation. While the European Union and the United Kingdom have already enacted such laws, Australia’s version has been stalled since November 2024. This legislation is crucial for holding tech companies accountable, mandating that they proactively address harmful content to safeguard everyone.
