OpenAI doubles down on ChatGPT safeguards as it faces wrongful death lawsuit

OpenAI details future safety plans for ChatGPT after the chatbot allegedly facilitated a teen's death

OpenAI reiterated existing mental health safeguards and announced future plans for its popular AI chatbot, addressing accusations that ChatGPT improperly responds to life-threatening discussions and facilitates user self-harm.

The company published a blog post detailing its model's layered safeguards just hours after it was reported that the AI giant was facing a wrongful death lawsuit filed by the family of California teenager Adam Raine. The lawsuit alleges that Raine, who died by suicide, was able to bypass the chatbot's guardrails and detail harmful and self-destructive thoughts, including suicidal ideation, which ChatGPT periodically affirmed.

ChatGPT hit 700 million active weekly users earlier this month.

"At this scale, we sometimes encounter people in serious mental and emotional distress. We wrote about this a few weeks ago and had planned to share more after our next major update," the company said in a statement. "However, recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us, and we believe it’s important to share more now."

Currently, ChatGPT's protections rely on a series of layered safeguards designed to constrain the chatbot's outputs. When they work as intended, ChatGPT is instructed not to provide self-harm instructions or comply with continued prompts on that subject, instead escalating mentions of bodily harm to human moderators and directing users to the U.S.-based 988 Suicide & Crisis Lifeline, the UK Samaritans, or findahelpline.com. As a federally funded service, 988 recently ended its LGBTQ-specific services under a Trump administration mandate, even as chatbot use among vulnerable teens grows.

In light of other cases in which isolated users in severe mental distress confided in unqualified digital companions, as well as previous lawsuits against AI competitors like Character.AI, online safety advocates have called on AI companies to take a more active approach to detecting and preventing harmful behavior, including automatic alerts to emergency services.

OpenAI said future GPT-5 updates will include instructions for the chatbot to "de-escalate" users in mental distress by "grounding the person in reality," presumably a response to increased reports of the chatbot enabling states of delusion. OpenAI said it is exploring new ways to connect users directly to mental health professionals before users report what the company refers to as "acute self harm." Other safety protocols could include "one-click messages or calls to saved emergency contacts, friends, or family members," OpenAI writes, or an opt-in feature that lets ChatGPT reach out to emergency contacts automatically.

Earlier this month, OpenAI announced it was upgrading its latest model, GPT-5, with additional safeguards intended to foster healthier engagement with its AI helper. Noting criticisms that the chatbot's prior models were overly sycophantic — to the point of potentially deleterious mental health outcomes — the company said its new model was better at recognizing mental and emotional distress and would respond differently to "high stakes" questions moving forward. GPT-5 also includes gentle nudges to end sessions that have gone on for extended periods of time, as individuals form increasingly dependent relationships with their digital companions.

The GPT-5 rollout sparked widespread backlash, with GPT-4o users demanding the company reinstate the former model after losing their personalized chatbots. OpenAI CEO Sam Altman quickly conceded and brought back GPT-4o, despite having previously acknowledged a growing problem of emotional dependency among ChatGPT users.

In the new blog post, OpenAI admitted that its safeguards can degrade and perform less reliably in long interactions, the kind many emotionally dependent users engage in every day, acknowledging that "even with these safeguards, there have been moments when our systems did not behave as intended in sensitive situations."

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.
