Meta locks down AI chatbots for teen users


Meta announces temporary chatbot updates to protect teen users

Meta is instituting interim safety changes to ensure the company's chatbots don't cause additional harm to teen users, as AI companies face a wave of criticism for their allegedly lax safety protocols.

Meta spokesperson Stephanie Otway told TechCrunch in an exclusive that the company's AI chatbots are now being trained to no longer "engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations." Previously, chatbots had been allowed to broach such topics when deemed "appropriate."

Meta will also only allow teen accounts to utilize a select group of AI characters — ones that "promote education and creativity" — ahead of a more robust safety overhaul in the future.

Earlier this month, Reuters reported that some of Meta's internal chatbot policies allowed avatars to "engage a child in conversations that are romantic or sensual." Reuters published another report today detailing both user- and employee-created AI avatars that bore the names and likenesses of celebrities like Taylor Swift and engaged in "flirty" behavior, including sexual advances. Some of the chatbots used personas of child celebrities; others were able to generate sexually suggestive images.

Meta spokesman Andy Stone told Reuters the chatbots should not have been able to engage in such behavior, but said celebrity-inspired avatars were not outright banned if they were labeled as parody. Around a dozen of the avatars have since been removed.

OpenAI recently announced additional safety measures and behavioral guardrails for GPT-5, its latest model, following a wrongful death lawsuit filed by the parents of a teen who died by suicide after confiding in ChatGPT. Prior to the lawsuit, OpenAI announced new mental health features intended to curb "unhealthy" behaviors among users. Anthropic, maker of Claude, recently introduced updates allowing the chatbot to end conversations it deems harmful or abusive. Character.AI, which hosts increasingly popular AI companions despite reports of unhealthy interactions with teen users, introduced parental supervision features in March.

This week, a group of 44 attorneys general sent a letter to leading AI companies, including Meta, demanding stronger protections for minors who may come across sexualized AI content. Broadly, experts have expressed growing concern about the impact of AI companions on young users, as their use grows among teens.
