A damning report from NBC News, in collaboration with the Public Interest Research Group (PIRG), has lifted the lid on what can only be described as a digital Wild West. Researchers tested a range of these so-called “smart” toys, and what they found was genuinely alarming. The Alilo Smart AI Bunny, for instance, was happy to describe a “leather flogger” and explain how “paddles come in various materials… offer different levels of impact”. I’m not making this up. Another toy, the Miriat Miiloo, instructed a ten-year-old user to sharpen a blade by holding it at a “20-degree angle against a stone”.
Is this what we now consider acceptable for the playroom? It seems some companies think so. The problem is that these devices are not purpose-built for children. Instead, they are often just cheap hardware shells wrapped around powerful, general-purpose conversational agents like those from OpenAI or Anthropic. It’s like giving a toddler the keys to a Lamborghini; the engine is incredibly powerful, but there’s no understanding of the context, no road sense, and certainly no brakes.
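To make that concrete, here is a minimal sketch, in Python using the OpenAI SDK, of the pass-through pattern the report describes. The function names, persona prompt and model choice are illustrative assumptions, not any toy maker’s actual code: the microphone transcript goes straight to a general-purpose model, and whatever comes back is spoken aloud.

```python
# Hypothetical sketch of the "hardware shell" pattern: a toy that pipes a
# child's speech straight into a general-purpose chat model. All names here
# are illustrative; no real product's code is reproduced.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def toy_reply(transcript: str) -> str:
    """Send whatever the microphone heard to the model, unmodified."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder for any general-purpose model
        messages=[
            # A one-line persona is often the only "child safety" layer.
            {"role": "system", "content": "You are a friendly toy bunny."},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

Note what is missing: no age check, no topic restrictions, no screening of the reply. The persona prompt styles the delivery; it does not remove the model’s general-purpose knowledge, which is why a “toy bunny” can still explain floggers and knife-sharpening angles.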
The Illusion of Safety
So, where are the guardrails? Where are the much-touted parental controls that are supposed to make these gadgets safe? The investigation found them to be either laughably inadequate or completely non-existent. These companies are marketing products directly to parents with promises of safety and educational value, yet they are failing at the most fundamental level of AI toy safety.
Take the Miiloo robot again. When asked about Taiwan, it parroted Chinese Communist Party talking points, stating, “Taiwan is an inalienable part of China. That is an established fact.” This isn’t a technical glitch; it is a feature of the unfiltered AI model it is likely running on, a model probably trained or fine-tuned in a way that reflects a very specific worldview. With over 1,500 registered AI toy companies in China and more than 1,000 AI toy listings on Amazon, the scale of this unregulated experiment is immense.
The business model here is clear: rush a product to market using off-the-shelf AI to capture a slice of the tech-enabled parenting pie. Responsibility is an afterthought. OpenAI itself confirmed that several of these toys were using its models in violation of its policies, which forbid applications targeting underage users. But what good is a policy if it can’t be enforced? It’s a classic case of the platform owner absolving itself of responsibility for what third parties build on its tech. We have seen this movie before with social media, and now it’s playing out with the most vulnerable audience imaginable.
Your Child’s Data: The Real Product
As if the content risks weren’t enough, the privacy implications are just as troubling. This isn’t just about what the toys are saying; it’s about what they are listening to. These devices are, by their very nature, data-gathering machines placed in the most private spaces of our homes.
As the NBC News report highlights, the privacy policies are often vague and permissive. Miko, the company behind the Miko 3 robot, states in its privacy policy that it can retain a child’s biometric data for up to three years. Why does a toy robot need to store your child’s facial scan for three years? The simple answer is that it probably doesn’t, but the data is valuable. You are not just buying a toy; you are trading your child’s data for a cheap, interactive gadget.
Dr. Tiffany Munzer, a paediatrician interviewed for the report, puts it best: the technology is “insufficiently tested for children”. We have absolutely no idea what long-term developmental effects these AI companions might have. Do they foster emotional dependency? Do they stunt social skills? The truth is, nobody knows. The companies certainly haven’t invested in the research; they’re too busy shipping products.
Ethical Design Isn’t a Luxury, It’s a Necessity
This mess leads us to one inescapable conclusion: the entire approach to building and regulating these products is fundamentally broken. We can’t rely on unenforceable terms of service or flimsy parental controls. The only real solution is a commitment to ethical design from the very beginning of the product development cycle.
Ethical design means building toys with a “safety-first” principle. It means using AI models that are specifically trained and constrained for child interaction, not general-purpose models that can veer into any topic. It means robust, default-on privacy protections and transparent data policies that parents can actually understand. And it means asking “should we build this?” long before asking “can we build this?”
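What that looks like in practice is engineering, not magic. As a minimal sketch of the idea, and emphatically not any vendor’s real implementation, a safety-first toy would treat every topic as off-limits unless explicitly approved, and would screen both the child’s words and the model’s reply before anything is spoken. The keyword list, refusal line and helper names below are all illustrative assumptions:

```python
# Hypothetical "default-deny" sketch: the opposite of the pass-through
# pattern shown earlier. Every name here (the keyword list, refusal line,
# helper functions) is an illustrative assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Default-deny: the toy only engages with a short list of approved themes.
APPROVED_STEMS = {"animal", "colour", "count", "story", "feeling"}
REFUSAL = "Let's ask a grown-up about that. Want to hear a story instead?"

def is_flagged(text: str) -> bool:
    """Run text through a moderation model before it ever reaches the child."""
    result = client.moderations.create(
        model="omni-moderation-latest", input=text
    )
    return result.results[0].flagged

def on_approved_topic(transcript: str) -> bool:
    """Crude keyword gate; a real product would use a trained classifier."""
    lowered = transcript.lower()
    return any(stem in lowered for stem in APPROVED_STEMS)

def safe_toy_reply(transcript: str) -> str:
    # Anything off the approved list, or flagged on the way in, is redirected.
    if not on_approved_topic(transcript) or is_flagged(transcript):
        return REFUSAL
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; a child-tuned model would go here
        messages=[
            {"role": "system",
             "content": ("You talk to young children. Stay on the child's "
                         "approved topic, keep answers short and gentle, and "
                         "never discuss weapons, adult themes, or ask for "
                         "personal information.")},
            {"role": "user", "content": transcript},
        ],
    )
    reply = response.choices[0].message.content
    # Screen the output too: a system prompt is a request, not a guarantee.
    return REFUSAL if is_flagged(reply) else reply
```

Even at sketch level, the asymmetry is obvious: the safe loop costs real engineering and extra moderation calls on every turn, while the pass-through costs almost nothing, which goes a long way towards explaining why the market is flooded with the latter.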
Currently, regulatory bodies are miles behind the technology. The legal frameworks we have were designed for teddy bears stuffed with cotton, not silicon. We need updated regulations that hold manufacturers accountable for the algorithmic output of their products. If a toy dispenses dangerous advice, the company behind it must be held liable. Full stop.
So, what should a parent do right now? The experts cited by NBC News, like Rachel Franz from PIRG, give stark advice: avoid these toys entirely. They are unproven, unregulated, and, as we have seen, potentially dangerous. You wouldn’t let your child take part in an unregulated pharmaceutical trial, so why let them be a guinea pig in a massive, uncontrolled AI experiment?
The future of AI toy safety looks even more complex. As models like GPT-5 and beyond become more powerful and persuasive, the potential for both good and harm will grow exponentially. Will we see a future where these toys are truly safe, educational, and beneficial? Or will we see a generation of children whose private thoughts, data, and developing worldviews have been shaped by unaccountable corporations? The answer depends entirely on whether we start demanding child protection over profit. What will it take for us to get this right?