The dark side of AI-powered toys | Explained 

Generative AI is now part of children’s toys. These novel AI-powered toy “companions” are available on popular e-commerce platforms, and their makers claim the toys help educate children. But experts warn that such toys could interfere with a child’s healthy development.

How do AI toys work?

AI toys require internet connectivity to work and can take the form of plush aliens, fluffy animals, or friendly-faced robots. Some robotic AI toys can move independently, while those shaped like stuffed animals are meant to be carried around by their owners.

Many of these AI toys come with embedded microphones that listen to children in order to formulate replies. Their makers promote them as products that offer educational answers, provide emotional support, guide children through tasks or games, teach them new skills, and return compliments.

Take, for example, Grem by Curio, a plush toy shaped like a cute, blue alien creature with large eyes. According to Curio’s website, the toy was voiced and designed by the singer Grimes, and it does not require a subscription.

Meanwhile, many Amazon.com reviews for the ‘Miko 3 AI Robot for Kids’ toy focused on how children were fascinated by the robot, with one buyer noting that it could play hide-and-seek with kids, cheer them up, and serve as an intercom for parents. Others complained about the subscription price and poor battery life.

Multiple AI toy makers on Amazon claim that the models powering these interactions come from reputable providers, such as OpenAI’s ChatGPT, and that safety measures are in place to protect children’s privacy and prevent the toys from initiating inappropriate conversation topics.

These AI toys are starkly different from toys that make the same sounds over and over again when a button is pressed, or those that can only mimic children’s voices and offer a limited set of replies. AI toys are also expensive.

Why are experts warning parents about AI toys?

AI toys made headlines in late 2025 when the advocacy group US PIRG Education Fund reported that Singapore-based FoloToy’s Kumma bear for children encouraged sexual conversations and allegedly told users how to access dangerous objects.

The toy previously used OpenAI’s GPT-4o, but the AI company later suspended the developer, and the toymaker moved to another tech provider, according to the group.

Common Sense Media, a tech and media ratings organisation, also outlined the risks of AI toys in a report on January 22. AI toys or “companions” that look cute and soft on the outside but use voice-based chatbots to engage children come with “unacceptable risks,” per the organisation.

Common Sense Media cited risk factors that included unhealthy emotional attachments to AI toys, children’s extremely private data being collected, and cases of chatbots not working reliably. Curio’s Grem and Miko 3 were two of the toys it tested.

The organisation pointed out that children aged five or under cannot properly tell humans apart from AI, so such toys could harm how these children develop and recognise key relationships. Even older children between the ages of six and 12, who might understand that AI isn’t real, could still use the toys as a replacement for healthy human connections, per the report.

The organisation listed a slew of concerning factors affecting not just children, but their families as well.

For example, AI toys that use subscriptions could harm children if they become dependent on the toys for emotional comfort while parents are unable to keep up with payments.

Furthermore, about 27% of AI toy outputs in Common Sense Media’s testing were inappropriate for children, including content about self-harm, drugs, and unsafe roleplay, as well as mature topics and risky advice.

“This happens because the underlying AI models are trained on adult internet content, and child-safety layers are added after the fact. These filters are imperfect, and content designed for adults or teens can leak through when children ask questions in unexpected ways,” noted the organisation.

Furthermore, Common Sense Media stressed the importance of giving children privacy during difficult moments and teaching them to cope with everyday frustrations in healthy ways, instead of distracting them with an always-cheerful toy.

The organisation reported that parental insight tools for these AI toys were largely inadequate, and that children’s private interactions were possibly being shared with third parties for further AI training purposes.

Due to hallucination, the AI toys could also provide incorrect responses and confuse their young users, per the report.

How should parents and caretakers treat AI toys?

Common Sense Media acknowledged that AI toys offered some benefits such as stimulating children’s learning, correcting potential bad behaviour, or telling customised stories to young users. The makers of these AI toys also point out that they offer a screen-free playtime experience for children, while encouraging learning and communication.

However, Common Sense Media recommended that parents engage with their children directly and guide them towards more traditional learning experiences, such as non-AI toys, books, museum visits, playdates, family game nights, imaginative play, and art.

“Human interaction is developmentally essential. No AI toy can replace the benefits of reading together with a parent, playing pretend with siblings, building with friends, or learning from teachers and caregivers,” stated Common Sense Media.

“These messy, complex human interactions are where real development happens. AI toys at best supplement these experiences and at worst replace them—and replacement is the greater risk,” it added.

Published – January 31, 2026 04:45 pm IST


