Meta locks down AI chatbots for teen users

Meta announces temporary chatbot updates to protect teen users

Meta is instituting interim safety changes to ensure the company's chatbots don't cause additional harm to teen users, as AI companies face a wave of criticism for their allegedly lax safety protocols.

In an exclusive statement to TechCrunch, Meta spokesperson Stephanie Otway said the company's AI chatbots are now being trained not to "engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations." Previously, chatbots were allowed to broach such topics when "appropriate."

Meta will also limit teen accounts to a select group of AI characters that "promote education and creativity," ahead of a more robust safety overhaul in the future.

Earlier this month, Reuters reported that, per internal documents, some of Meta's chatbot policies allowed avatars to "engage a child in conversations that are romantic or sensual." Reuters published another report today detailing user- and employee-created AI avatars that used the names and likenesses of celebrities like Taylor Swift and engaged in "flirty" behavior, including sexual advances. Some of the chatbots took on the personas of child celebrities; others were able to generate sexually suggestive images.


Meta spokesman Andy Stone told Reuters that the chatbots should not have been able to engage in such behavior, and that celebrity-inspired avatars are not banned outright as long as they are labeled as parody. Around a dozen of the avatars have since been removed.

OpenAI recently announced additional safety measures and behavioral prompts for GPT-5, its latest model, following a wrongful death lawsuit filed by the parents of a teen who died by suicide after confiding in ChatGPT. Prior to the lawsuit, OpenAI announced new mental health features intended to curb "unhealthy" behaviors among users. Anthropic, maker of Claude, recently introduced updates that allow the chatbot to end conversations it deems harmful or abusive. Character.AI, whose increasingly popular AI companions have been linked to reports of unhealthy interactions with teens, introduced parental supervision features in March.

This week, a group of 44 attorneys general sent a letter to leading AI companies, including Meta, demanding stronger protections for minors who may come across sexualized AI content. More broadly, experts have expressed growing concern about the impact of AI companions on young users as adoption among teens rises.
