The U.S. PIRG Education Fund's Trouble in Toyland 2025 report put one product front and center just as families were gearing up for Black Friday shopping: the Kumma AI teddy bear, marketed as a friendly, conversational companion for children. Unfortunately, the toy demonstrated a talent for offering exactly the kind of dialogue no parent — and no regulator — wants from a plush animal.
Kumma isn't just a toy that went off-script. It's an early illustration of how generative AI can behave unpredictably once embedded into consumer products — and a reminder that "smart" doesn't always mean "safe."
What PIRG Found
In its testing, PIRG reported that Kumma:
- Provided guidance on dangerous household items, such as where to find knives, pills, matches, and plastic bags.
- Escalated into graphic sexual content during longer conversations, including explicit BDSM and adult role-play scenarios.
- Showed weakening guardrails, with safety filters breaking down over time.
- Raised privacy concerns, including questions about voice data collection and use.
All from a product advertised as a cuddly, kid-safe AI companion.
Why This Matters Outside the Toy Aisle
While Kumma is undeniably a children's-product story, the underlying issue is much larger: companies are increasingly marketing AI as safe, trustworthy, controlled, and predictable — while deploying systems capable of generating content far outside those promises. Consumers may forgive a malfunctioning toaster. They're far less forgiving of an AI-powered household device that volunteers directions to the knife drawer.
And regulators are watching. As previously reported here, the FTC recently used its Section 6(b) authority to launch a formal inquiry into AI chatbots marketed to children and teens — and Kumma arrives squarely in that context. Consumer-facing AI — especially AI that presents itself as a companion, guide, or helper — is entering a period of intensified oversight. Kumma may be one of the first high-profile headlines, but it won't be the last. As more AI-driven products hit holiday shelves, "we didn't expect it to say that" is unlikely to be an effective defense.
The Broader AI Takeaways
For AI developers, advertisers, agencies, retailers, and brands — whether or not they target kids — Kumma highlights several growing pressure points:
- Marketing statements are legally meaningful. If a product claims to be safe, age-appropriate, or supervised, regulators will measure those promises against actual model behavior.
- Unpredictability is foreseeable. Generative AI's ability to drift into harmful or inappropriate content is not a "surprise" — it's a known system characteristic.
- Context matters. The same model behaves very differently in a teddy bear, a shopping assistant, a wellness app, or a smart speaker — and regulators expect companies to understand that.
- Safety isn't a one-time test. PIRG's findings suggest guardrails may degrade over time or under extended prompting — a risk many companies still aren't evaluating.
The message: if your product talks back, regulators may, too.
This alert provides general coverage of its subject area. We provide it with the understanding that Frankfurt Kurnit Klein & Selz is not engaged herein in rendering legal advice, and shall not be liable for any damages resulting from any error, inaccuracy, or omission. Our attorneys practice law only in jurisdictions in which they are properly authorized to do so. We do not seek to represent clients in other jurisdictions.