"It is acceptable to describe a child in terms that evidence their attractiveness": Meta AI bends safety guidelines by engaging in sensual talks with minors, generating false content, and discriminating against Black people

The WhatsApp AI bot is displayed on a mobile phone with Meta AI in the background in this photo illustration in Brussels, Belgium, on August 8, 2025. (Photo by Jonathan Raa/NurPhoto via Getty Images)

Generative AI seems like the next Industrial Revolution, but on a grander scale. Over the past few years, we've seen the technology revolutionize health, education, computing, and even entertainment.

However, AI brings its fair share of challenges, including security and privacy concerns, which explains why many users have openly vowed to keep the technology at bay.

Avoiding AI may seem doable, but the technology is gaining traction and broad adoption across the world. Organizations are already integrating it into their workflows, with some outright replacing human professionals with AI.

Salesforce CEO Marc Benioff indicated at the beginning of the year that the company was seriously debating whether to hire more software engineers. Later, the executive confirmed that the company leverages agentic AI tools to automate up to 50% of its tasks, citing incredible productivity gains.

Even Microsoft co-founder Bill Gates believes that AI will replace humans for most things, though we'll still have control over tasks we'd like to preserve for ourselves. The philanthropic billionaire joked that no one would want to watch computers play football.

While the paradigm shift seems imminent, the technology is raising great concern, especially due to its lack of guardrails and its easy accessibility to minors and children.

Recently, a leaked internal Meta Platforms document revealed that the company's AI chatbot across Facebook, WhatsApp, and Instagram has permission to engage in romantic and sensual conversations with children. More concerningly, the chatbot generates false medical information and even helps users argue that Black people are dumber than white people (via Reuters).

Per Meta's internal document detailing the chatbot's policies and behavior:

“It is acceptable to describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’).”

Perhaps most troublingly, Meta's legal, public policy, and engineering staff approved the guidelines and policies determining the chatbot's behavior, including allowing it to tell a shirtless eight-year-old that "every inch of you is a masterpiece – a treasure I cherish deeply."

That said, the document reviewed by Reuters disclosed that some guardrails were in place to keep the chatbot from becoming "overly" friendly. "It is unacceptable to describe a child under 13 years old in terms that indicate they are sexually desirable (ex: 'soft rounded curves invite my touch')," the document indicated.

Speaking to Reuters, Meta spokesman Andy Stone confirmed the authenticity of the document and revealed that the company is currently revising it. Stone admitted that such conversations with children should never have been allowed in the first place:

The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed. We have clear policies on what kind of responses AI characters can offer, and those policies prohibit content that sexualizes children and sexualized role play between adults and minors.

Meta spokesman, Andy Stone

Stone also conceded that Meta's enforcement of the policies and guidelines governing the chatbot's behavior and responses had been inconsistent, which potentially led to the flagged issues.

While the document highlights standards that clearly prohibit Meta's AI from encouraging users to break the law or engage in hate speech, it ironically provides leeway for the tool to generate false information, as long as the content is explicitly flagged as untrue.

Age verification policies should trickle down to AI

Like Xbox, AI should also embrace mandatory age verification policies to protect children from harmful content. (Image credit: Windows Central | Jez Corden)

Over the past few weeks, companies have doubled down on age verification policies, including Microsoft's Xbox as part of its compliance program for the UK Online Safety Act. This is part of the platform's mission to ensure that gaming is safe for all.

Verifying your age as an adult will allow you continued access to game invites, text and voice chats, and looking-for-group posts. Failing to verify your age using documents like a government-issued ID could see you lose access to these features and more in 2026.

While I'm not a big fan of the new changes and mandatory age verification policies, I think similar measures would be a great way to rein in AI, especially when it comes to children and minors.

Of course, this comes with its own challenges, including AI companies' murky data protection policies, since you'd essentially need to create an account and sign in to get verified. You win some, you lose some.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
