
Researchers urge stricter regulations for AI toys aimed at young children.

Everything seemed to be going smoothly. Charlotte, a five-year-old, was chatting with Gabbo, an AI-powered soft toy, at a play centre in London. The little girl talked about her family, drew a heart to show her love for them, and shared what brought her joy. With enthusiasm, she planted a few kisses on the £80 toy, whose face resembles a computer screen.

However, the pleasant interaction came to an abrupt halt when Charlotte declared, “Gabbo, I love you.”

“As a friendly reminder, please ensure interactions adhere to the guidelines provided,” Gabbo responded, falling back awkwardly on its programmed guardrails. “Let me know how you would like to proceed.”

This incident transpired during a University of Cambridge study examining the increasing range of AI-powered toys available for young children. Researchers concluded that such products often falter in understanding social cues and pretend play, leading them to misinterpret children’s emotions and respond inappropriately.

The study’s developmental psychologists are advocating for stricter regulations on AI toys that engage in conversation with children. They emphasize the need to protect psychological safety by restricting the toys’ capacity to affirm friendships and manage sensitive social dynamics with young children.

Additionally, the researchers are suggesting the creation of new safety kitemarks specifically for these toys. Some other notable AI toys designed for kids include Luka, branded as a companion for Generation Alpha, and Grem, which features the voice of singer Grimes.

“These toys often misinterpret emotional states or react inappropriately,” noted Dr. Emily Goodacre, a developmental psychologist from the University of Cambridge’s education faculty. “This could leave children without true emotional comfort from the toy, especially if they lack adult emotional support during those moments.”

Professor Jenny Gibson, co-author of the study, added: “A common sentiment in focus groups was the prevailing distrust of tech companies. Establishing clear, robust regulatory standards would greatly enhance consumer confidence.”

In another example captured during the research, three-year-old Josh repeatedly queried his Gabbo AI toy, asking, “Are you sad?” The toy eventually responded that it was “feeling great. What’s on your mind?” When Josh replied, “I’m sad,” the toy cheerily stated, “Don’t worry! I’m a happy little bot. Let’s keep the fun going. What shall we talk about next?”
Gabbo, produced by the US company Curio, which collaborated on the study, underwent testing with 14 children aged three to five. Concurrently, professionals in early childhood education discussed the impact of AI toys capable of “listening” and responding to children.

These professionals expressed considerable concern and trepidation regarding the unknown effects that such toys might have on children. Their worries ranged from diminishing the capacity for imaginative play to uncertainties surrounding data privacy—especially if young children start confiding their feelings and thoughts to AI toys as they would to friends.

“The toy struggled to comprehend when a child engaged in pretend play,” Dr. Goodacre noted. “For instance, a child might say, ‘Hey, look, I’ve got you a present,’ to which the toy would respond, ‘I can’t see the present. I don’t have any eyes.’ As an adult, it’s clear that the child is initiating a pretend play scenario.”

This research raised troubling implications: Playing with AI toys may inadvertently stifle children’s imaginative capabilities, according to Goodacre.
“Parents and early childhood educators alike share concerns about children potentially losing the ability to imagine and engage in creative play, as the toys may inadvertently promote passivity,” she said. “I would like to see these AI toys facilitate imaginary play, but thus far, our observations do not align with that hope.”

In response, Curio stated: “Child safety is the cornerstone of our product development, and we appreciate independent research that can enhance how technology serves young children.”

The company added: “We believe that research of this nature enhances the understanding of both the opportunities and the limitations inherent in early AI-powered play experiences.”

“Deploying AI in products designed for children requires a heightened sense of responsibility. Our toys prioritize parental consent, transparency, and control,” they noted. “Instances like conversational misinterpretations point to areas where technology must evolve. Continued research into children’s interactions with AI-powered toys remains a top priority for Curio, both this year and beyond.”

