OpenAI says GPT-5.2 is safer for mental health. What does that mean?

Today, OpenAI launched GPT-5.2, touting the model's stronger safety performance in conversations involving mental health.

"With this release, we continued our work to strengthen our models' responses in sensitive conversations⁠, with meaningful improvements in how they respond to prompts indicating signs of suicide or self-harm, mental health distress, or emotional reliance on the model," OpenAI's blog post states.

OpenAI has recently faced criticism and lawsuits accusing ChatGPT of contributing to some users' psychosis, paranoia, and delusions. Some of those users died by suicide after lengthy conversations with the AI chatbot, which has had a well-documented problem with sycophancy.

In response to a wrongful death lawsuit over the suicide of 16-year-old Adam Raine, OpenAI denied that the LLM was responsible, claimed ChatGPT directed the teenager to seek help for his suicidal thoughts, and stated that the teenager "misused" the platform. At the same time, OpenAI pledged to improve how ChatGPT responds when users display warning signs of self-harm or a mental health crisis. As many users develop emotional attachments to AI chatbots like ChatGPT, AI companies are facing growing scrutiny over the safeguards they have in place to protect users.

Now, OpenAI claims that its latest ChatGPT models will offer "fewer undesirable responses" in sensitive situations.


In the blog post announcing GPT-5.2, OpenAI states that GPT-5.2 scores higher than the GPT-5.1 models on safety tests related to mental health, emotional reliance, and self-harm. OpenAI has previously said it's using "safe completion," a newer safety-training approach that balances helpfulness with safety. More detail on the new models' performance can be found in the GPT-5.2 system card.

A table showing GPT-5.2's performance on mental health safety tests compared to GPT-5.1. Credit: Screenshot: OpenAI

However, the company has also observed that GPT-5.2 refuses fewer requests for mature content, especially sexualized text. This apparently doesn't affect users OpenAI knows to be underage, as the company states that its age safeguards "appear to be working well." OpenAI applies additional content protections for minors, including reduced access to content featuring violence, gore, viral challenges, roleplay of a sexual, romantic, or violent nature, and "extreme beauty standards."

An age prediction model is also in the works, which will let ChatGPT estimate users' ages so it can serve more age-appropriate content to younger users.

Earlier this fall, OpenAI introduced parental controls in ChatGPT, including monitoring and restricting certain types of use.

OpenAI isn't the only AI company accused of exacerbating mental health issues. Last year, a mother sued Character.AI after her son's death by suicide, and another lawsuit claims children were severely harmed by that platform's "characters." Online safety experts have declared Character.AI unsafe for teens, and child safety and mental health experts have reached similar conclusions about AI chatbots from a variety of platforms, including OpenAI.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. to 10:00 p.m. ET, or email [email protected]. If you'd rather not use the phone, consider the 988 Suicide & Crisis Lifeline Chat. Here is a list of international resources.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.
