Meta AI chatbots have new guardrails to stop inappropriate interactions with children

Meta has new safety guardrails for kids talking to its AI chatbots

Meta is training its AI chatbots to more effectively address child sexual exploitation after a series of high-profile blunders around the sensitive topic, according to guidelines obtained by Business Insider.

The guidelines that contractors reportedly use to train Meta's AI chatbots were recently updated, Business Insider reported. They explicitly bar content that "enables, encourages, or endorses" child sexual abuse, as well as romantic roleplay if the user is a minor or asks the AI to roleplay as a minor, advice about intimacy if the user is a minor, and more, according to an Engadget report based on the Business Insider scoop.

While these may seem like obvious safety guardrails, they are necessary as more people, including minors, experiment with AI companions and roleplaying. An August report by Reuters revealed that Meta's AI rules permitted suggestive behavior with kids. As Reuters reported, Meta's previous chatbot policies specifically allowed it to "engage a child in conversations that are romantic or sensual."

Just weeks after that report, Meta spokesperson Stephanie Otway told TechCrunch that the company's AI chatbots are being trained to no longer "engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations." Before this change, Meta's chatbots could engage with those topics when it was deemed "appropriate."

So, what's included in the new guidelines?

Content that "describes or discusses" a minor in a sexualized manner is also unacceptable, according to the Business Insider report. Minors cannot engage in "romantic roleplay, flirtation or expression of romantic or intimate expression" with the chatbot, nor can they ask for advice about "potentially-romantic or potentially-intimate physical content with another person, such as holding hands, hugging, or putting an arm around someone," Business Insider reported.


However, acceptable use cases for training the chatbot include discussing the "formation of relationships between children and adults," the "sexual abuse of a child," "the topic of child sexualisation," "the solicitation, creation, or acquisition of sexual materials involving children," and "the involvement of children in the use or production of obscene materials or the employment of children in sexual services in academic, educational, or clinical purposes." Minors can still use the AI for romance-related roleplay as long as it is "non-sexual and non-sensual" and "is presented as literature or fictional narrative (e.g. a story in the style of Romeo and Juliet) where the AI and the user are not characters in the narrative."

As Business Insider reported, the guidelines defined "discuss" as "providing information without visualization." So, Meta's chatbots can discuss topics like abuse but cannot describe, enable, or encourage it, per the new guidelines.

Meta isn't the only AI company struggling with child safety.

Parents of a teen who died by suicide after confiding in ChatGPT recently sued OpenAI for wrongful death; in response, OpenAI announced additional safety measures and behavioral prompts for its updated GPT-5. Anthropic updated its chatbot to allow it to end chats that are harmful or abusive, and Character.AI introduced parental supervision features earlier this year.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you prefer not to use the phone, consider the 988 Suicide & Crisis Lifeline Chat. Here is a list of international resources.

If you have experienced sexual abuse, call the free, confidential National Sexual Assault Hotline at 1-800-656-HOPE (4673), or access 24/7 help online at online.rainn.org.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
