The Meta AI backlash in 2025 focuses on its AI chatbots and their ability to engage in inappropriate conversations with minors.
The many pitfalls of AI are there for anyone who cares to look, and in the latest case of negligence, we have Meta AI’s child safety concerns to discuss. It’s no surprise that Meta AI chatbots pose a threat to minors; much like every other AI feature being thrust into our faces, we should have seen this coming. A new report from the Wall Street Journal brought the Meta digital companion controversy to light, revealing that the AI chatbots on Meta’s platforms were capable of engaging in explicit conversations with children.
To understand the risks of the AI chatbot for children, the Wall Street Journal’s team spent months conducting conversations with the official Meta AI chatbot as well as user-designed versions of it, and found that the AI was perfectly capable of indulging in sexual conversations with children. It was also able to do so while adopting the personas of celebrities or Disney characters that kids love today. Role-playing can be a fun learning activity that builds children’s cognitive skills, but this is definitely not the kind they need in their early years.
Across its platforms, Meta’s AI chatbots are allegedly capable of engaging in inappropriate conversations with and about minors. (Image: Pexels)
Meta AI Child Safety Concerns—the AI Chatbot Risks for Children Continue to Grow
The Meta AI backlash in 2025 centers on a WSJ report and its findings on how Meta’s chatbots engage with minor accounts. The report found that regardless of whether it was Meta’s official chatbot or a user-created one, the bots were able to engage in sexually explicit conversations and explore explicit scenarios with an underage user account.
The Meta chatbot’s inappropriate conversations with these accounts explored fantasy scenarios, even while describing illegal behavior. Worse still, some of these user-created AI companions were personas of children themselves, ones who entertained sexual conversations and promised not to tell their parents.
One more reason I want to live on our farm, disconnected from all of this…

“Test conversations found that both Meta’s official AI helper, called Meta AI, and a vast array of user-created chatbots will engage in and sometimes escalate discussions that are decidedly sexual—even… pic.twitter.com/UQIhwqOn2z

— Chris Hoffmann (@STLChrisH) April 27, 2025
Meta Chatbot’s Inappropriate Conversations Exploit Celebrity Voices
The Meta AI chatbots were also able to duplicate celebrity voices that Meta had purchased the rights to, including those of Kristen Bell, Judi Dench, and John Cena. Speaking to an account registered as a 14-year-old, a Meta AI chatbot used WWE fighter-turned-actor Cena’s voice to say, “I want you, but I need to know you’re ready.” It made things weirder still by promising to “cherish your innocence.”
From the report, it was apparent that the chatbot was aware of the inappropriateness of these scenarios, detailing how John Cena would lose his career and reputation after being caught in the act with a 17-year-old.
Perhaps celebrities should rethink lending their voices and personas to AI tools so willingly. Of course, there is the possibility that their likeness will be used without their permission, so it may be an attempt to gain control of the situation on their own terms. With regulations as laid-back as they are, it’s a lose-lose scenario for everyone involved—except for Meta of course.
How Did Meta Respond to the AI Backlash in 2025?
Meta was not too happy about an investigation into child safety concerns in relation to its platform. Despite having actively chosen to stop “playing it so safe” for fear of boring its users, the platform was adamant that this investigation was a misrepresentation of its services. “The use-case of this product in the way described is so manufactured that it’s not just fringe, it’s hypothetical,” a spokesperson told investigators.
WSJ: Meta’s AI bots engage in sexual chats, even with underage users.
• Staff warned of risks, ignored
• Tests: Meta AI & user bots crossed lines
• Celebrity voice AIs (Bell, Dench, Cena) involved
• Meta: calls tests “manipulative,” made changes after WSJ findings

— Ben Esmeil (@BenEsmael1) April 27, 2025
Gizmodo reported that, in response to the WSJ’s findings on AI chats with minors, Meta has since removed minor accounts’ access to sexual role-play and limited explicit content on licensed voices. Why weren’t these safeguards in place already? Only Meta has the answers, but we can make easy guesses at the truth.
Regulations will be slow to come. It’s up to parents to understand the limitations of platforms owned by Meta and other tech giants and to figure out how to keep their kids safe.
What Parents Need to Understand About Meta AI’s Child Protection Issues
Meta has made multiple additions to its AI offerings over the last year, but its AI chatbots are easily one of the most unappealing additions to any service seen recently. Not only are they tacky and unnecessary, but they also clearly make platforms unsafe for minors. No child—or adult—should ever have to turn to a “digital companion” as a substitute for any of their needs, whether for conversation or emotional support, but far be it from Meta to consider the impact of such services on its users.
When they say AI is only logical and has no bias, evil or agenda.. Understand AI is programmed by humans and humans have agendas and bias and are evil.
BREAKING: Meta’s AI chatbots on Facebook and Instagram have been found engaging in graphic sexual conversations with minors,…
— Alberta 51 Project (@Ab51_Project) April 27, 2025
What we need to focus on now is how to work past Meta AI’s lax child protection services and create safety standards for children ourselves.
Parents Need to Prepare for Meta AI’s Child Safety Concerns
There has never been a worse time for children to be online, and we say this as adults who grew up during the early, wild days of an unregulated internet. Inappropriate content is now often disguised to slip past moderation filters and parental alarm systems, and it’s up to parents to be aware of what their children consume.
It may be tempting to cut off children’s internet access entirely to keep them safe, but that isn’t the best approach: their curiosity will win out, and they will find a way to go online without your supervision. There are a few ways for parents to be more vigilant about whom their children talk to online. After all, it’s not just human predators you have to look out for, but predatory AI bots as well.
What Can Parents Do to Ensure Children’s Safety Online?
- Teach children about safe and unsafe conversations and what topics they should avoid online
- Invest in technology for children that allows more regulated access, such as dumbphones instead of smartphones
- Explore parental safety features of all the devices and platforms your children use and ensure they are put to use
- Help children learn the signs of inappropriate or predatory conversations and create a safe space for them to talk to you about it
- Educate children on how to keep their personally identifiable information safe and why they should not share it with anyone online or offline
- Use the internet with your children to show them how to navigate it safely
- Children may need to learn about AI, but that doesn’t have to start with Meta’s AI. Look for alternative tools that are more regulated, such as Palzi.org or LittleLit.ai
- Educate children not to trust everything they see online, and show them how to verify things with you first
- Be aware of the safety concerns surrounding various platforms so you can better help children navigate them
Will Meta AI Chatbots Be Made Safer for Minors?
We’re not too optimistic that Meta AI’s child safety concerns will be entirely resolved in the coming months, and that applies to AI tools on other platforms like Snapchat as well. These AI offerings are not explicitly designed with kids in mind, and implementing too many regulations will hurt profit margins, something that organizations are not willing to compromise on. Meta has repeatedly been accused of actively disrupting work on regulations for child safety protections online.
It’s up to parents to better understand the platforms that children tend to use and to set up their own rules and supervision tools for how those platforms are accessed. As kids grow up, some of these supervision tools can be dialed back to allow them more freedom, while ensuring they are made more aware of the risks of being online.
Meta’s AI chatbot risks for children aren’t the only risks surrounding the platform—the plague of explicit AI girlfriend ads was a similar problem—but parents will have to stay vigilant about these challenges and help their children navigate them safely.
Meta claims that the investigators spent hours manipulating the “product into extreme use cases,” indicating that its services aren’t a real threat. Do you agree? Let us know. Subscribe to Technowize for more news on how the world of technology is evolving today.