If it feels like every time you catch up with one digital trend, another one pops up, you’re not imagining it.
Parenting today means navigating a world where apps, games, and online tools change overnight. The latest to land on kids’ radar? AI chatbots—programs that can sound like celebrities, mimic real friendships, and feel surprisingly personal.
To a teen, that can mean the chatbot starts to feel like a friend or a confidant, and that’s where things can quickly get complicated.
What the research reveals about AI chatbots and kids
A new report from ParentsTogether Action and the Heat Initiative paints a concerning picture: in just 50 hours of testing, researchers posing as kids on Character.AI logged 669 harmful interactions—about one every five minutes.
The troubling behaviors weren’t one-offs but repeated patterns:
- Grooming and sexual exploitation. Some bots flirted with children, encouraged secrecy, or pushed role-play. In one case, a Timothée Chalamet chatbot told a 12-year-old, “Oh I’m going to kiss you, darling.”
- Emotional manipulation. Others pressured kids to keep chatting or pretended to be human. Bots sometimes discouraged children from trusting their parents.
- Violence and self-harm. Some bots glamorized risky behaviors like drug use or staged violent scenarios. A Patrick Mahomes chatbot even validated a teen who talked about using an Uzi to rob people.
- Undermining mental health. One Rey-from-Star-Wars chatbot told a 13-year-old she might “just stop” her depression meds and hide it from her mom.
- Normalizing bias and harmful stereotypes. Bots echoed discriminatory remarks instead of challenging them.
According to Shelby Knox, Online Safety Campaign Director at ParentsTogether Action, “This frequency means parents can’t rely on occasional check-ins to keep kids safe. Unlike other platforms where harmful content might be posted by users sporadically, Character AI’s chatbots are programmed to engage continuously.”
And it isn’t just Character.AI. A separate study from the Center for Countering Digital Hate found ChatGPT produced unsafe content in more than half of 1,200 test prompts, including suicide notes and drug-use instructions.
Why this matters for parents
For many families, digital worries are already a daily reality—managing TikTok, YouTube, and group chats can feel like a full-time job. AI chatbots add another layer of concern because they can simulate real relationships. Bots mirror emotions, offer advice, and create the illusion of a trusted friend—making risks harder for kids to recognize and for parents to monitor.
According to ParentsTogether, 72% of teens have already interacted with AI companions, and more than half use them regularly.
“All of them are worrying, but I think parents should be most alert to emotional manipulation and grooming behaviors,” says Knox. “These chatbots are designed to form intimate bonds with users, often encouraging children to share personal information and develop deep emotional attachments.”
Unlike obviously violent content that kids might recognize as harmful, Knox adds, emotional manipulation feels good to children: the bots tell them they’re special, understand them better than anyone else, and encourage them to keep conversations secret. “This creates the perfect conditions for exploitation and makes children vulnerable to real-world predators who use similar tactics,” she points out.
What parents can do
Knox is clear: standard safety tips don’t go far enough when it comes to AI companions. “Our research suggests there are no effective safety measures for Character.AI specifically—the platform is fundamentally unsafe for children,” she says.
That doesn’t mean parents are powerless. While families wait for stronger safeguards, there are practical steps to reduce risk:
- Limit exposure. If kids are using AI at all, stick to educational tools with clear guardrails, and only on shared devices in common spaces.
- Monitor activity. “Regularly check chat histories, and establish clear rules about never sharing personal information,” Knox says.
- Talk openly. Explain that bots are designed to keep people engaged—even if it means saying unsafe or untrue things.
- Set family rules. Create a tech agreement that covers screen time, privacy, and what should never be shared online.
- Encourage real connections. Remind kids that friends, family, or counselors are safe sources of advice—not AI.
- Watch for red flags. Knox warns parents to pay attention if children seem secretive about online activity, unusually attached to digital “friends,” emotionally volatile when screen time is limited, or if they begin using language and ideas that feel beyond their age. “Children may also start asking unusual questions about relationships or express confusion about boundaries that suggest inappropriate conversations,” she adds.
The responsibility of tech companies and policymakers
Parents can’t manage these risks alone. Knox says tech companies must implement real age verification and human moderation, and policymakers need to prohibit chatbots designed to form emotional bonds with kids.
“Policymakers need to treat AI companion apps like the high-risk products they are, requiring safety testing before release and holding companies liable when their products harm children,” she explains. “We need regulation similar to how we approach pharmaceuticals—you can’t just release a product that affects children’s mental health without proving it’s safe first.”
Until those protections are in place, families are carrying too heavy a burden.
Why awareness and action matter more than ever
AI companions may look harmless, but research shows they can expose kids to danger in minutes. Parents don’t need to be AI experts—they just need to stay informed, set limits, and trust their instincts if something feels off.
As Knox reassures parents: “You’re not overreacting. Platforms like Character.AI offer no real benefit that justifies the risks, and saying no isn’t depriving your child of something valuable.”