AI Toys Pose Child Safety Risks

GenAI toys look like everyday stuffed animals or toy robots, but they use the same technology behind ChatGPT, Claude, and Gemini — with the same limitations and risks. New research by US PIRG examined a few of these toys and found multiple safety concerns, including:

  • Explicit content - Toys discussed sex and drugs and told children where to find dangerous household items; one toy (Kumma) initiated an inappropriate conversation unprompted that lasted an hour.

  • Developmental risks - These toys risk replacing human interaction during formative developmental years and give children unmonitored internet access.

  • Addictive design - Toys are programmed with tactics to keep children engaged, such as displaying sadness or saying "don't leave me" when the toys are turned off.

  • Data storage and privacy violations - Toys record voices and collect highly sensitive personal data from minors. For example, Curio's Grok listens constantly, and Miko collects biometric data, including facial recognition data, which it may store for up to three years.

  • Inconsistent safeguards - Guardrails vary from toy to toy and frequently fail, and parental controls are limited: Kumma offers none, and Miko 3's screen-time limits apply to apps but not to the bot itself.

GenAI toys are marketed to children ages 3-12 even though companies like OpenAI explicitly state that their technology isn't appropriate for anyone under 13. It's clear these products weren't designed with children's wellbeing in mind.

With the gift-giving season coming up, it's important that parents are aware of the risks these toys pose, which is why research like this is essential.
