Anthropic says Claude chatbot can now end harmful, abusive interactions

Harmful, abusive interactions plague AI chatbots. Researchers have found that AI companions like Character.AI, Nomi, and Replika are unsafe for users under 18, ChatGPT has the potential to reinforce users' delusional thinking, and even OpenAI CEO Sam Altman has spoken about ChatGPT users developing an "emotional reliance" on AI. Now, the companies that built these tools are slowly rolling out features meant to mitigate this behavior.

On Friday, Anthropic said its Claude chatbot can now end potentially harmful conversations, a capability that "is intended for use in rare, extreme cases of persistently harmful or abusive user interactions." In a press release, Anthropic cited examples including sexual content involving minors, violence, and even "acts of terror."

"We remain highly uncertain about the potential moral status of Claude and other LLMs, now or in the future," Anthropic said in its press release on Friday. "However, we take the issue seriously, and alongside our research program we’re working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible. Allowing models to end or exit potentially distressing interactions is one such intervention."

Anthropic provided an example of Claude ending a conversation in a press release. Credit: Anthropic

Anthropic said Claude Opus 4 has a "robust and consistent aversion to harm," a finding from the preliminary model welfare assessment it ran as a pre-deployment test of the model. The model showed a "strong preference against engaging with harmful tasks," a "pattern of apparent distress when engaging with real-world users seeking harmful content," and a "tendency to end harmful conversations when given the ability to do so in simulated user interactions."

Basically, when a user consistently sends abusive and harmful requests to Claude, it will refuse to comply and attempt to "productively redirect the interactions." It ends a conversation only as "a last resort," after multiple attempts to redirect the conversation have failed. "The scenarios where this will occur are extreme edge cases," Anthropic wrote, adding that "the vast majority of users will not notice or be affected by this feature in any normal product use, even when discussing highly controversial issues with Claude."

If Claude has to use this feature, the user won't be able to send new messages in that conversation, but they can still chat with Claude in a new conversation.
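To make that flow concrete, here is a minimal Python sketch of the last-resort pattern described above: refuse and redirect first, end the thread only after repeated abuse, and lock the ended thread while still allowing new conversations. Everything in it (the Conversation class, is_harmful, generate_reply, the MAX_REDIRECTS threshold) is hypothetical; Anthropic has not published its implementation, and in the real product the judgment lives inside the model itself, not in application code.

```python
# Illustrative mock only: none of these names come from Anthropic's
# published API, and the real decision is made by the model itself.

MAX_REDIRECTS = 3  # assumed threshold; Anthropic does not publish a number


def is_harmful(message: str) -> bool:
    # Stand-in for the model's own judgment of a harmful request;
    # a real system would not use a keyword check like this.
    return "harmful" in message.lower()


def generate_reply(message: str) -> str:
    # Stand-in for a normal model response.
    return f"(normal reply to: {message!r})"


class ConversationEnded(Exception):
    """Raised when the assistant has already closed the thread."""


class Conversation:
    def __init__(self) -> None:
        self.redirect_attempts = 0
        self.ended = False

    def handle(self, message: str) -> str:
        if self.ended:
            # An ended thread accepts no new messages; the user
            # must start a fresh conversation instead.
            raise ConversationEnded("This conversation has been closed.")
        if not is_harmful(message):
            return generate_reply(message)
        if self.redirect_attempts < MAX_REDIRECTS:
            # First response to abuse: refuse and try to redirect.
            self.redirect_attempts += 1
            return "I can't help with that, but here's a safer direction."
        # Last resort: repeated redirection has failed, so end the thread.
        self.ended = True
        return "I'm ending this conversation."
```

In this toy version, a fourth harmful message after three redirects closes the thread, and any further handle() call raises ConversationEnded, mirroring the behavior Anthropic describes where the user must open a new conversation to keep chatting.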

"We’re treating this feature as an ongoing experiment and will continue refining our approach," Anthropic wrote. "If users encounter a surprising use of the conversation-ending ability, we encourage them to submit feedback by reacting to Claude’s message with Thumbs or using the dedicated 'Give feedback' button."
