OpenAI announces new parental controls for teen ChatGPT users

OpenAI is appealing directly to concerned parents as the AI giant announces plans for a new suite of parental oversight features.

The company explained in a new blog post that it is moving ahead with more robust tools for parents who hope to curb unhealthy interactions with its chatbot, as OpenAI faces its first wrongful death lawsuit after the death by suicide of a California teen.

The features — which will be released along with other mental health initiatives over the next 120 days — include account linking between parent and teen users and a tighter grip on chatbot interactions. Caregivers will be able to set how ChatGPT responds (in line with the model's "age-appropriate" setting) and disable chat history and memory.

OpenAI also plans to add parental notifications that flag when ChatGPT detects "a moment of acute distress," the company explains. The feature is still in development with OpenAI's panel of experts.


In addition to new options for parents, OpenAI said it would expand its Global Physician Network and its real-time router, a feature that can instantly switch a user's interaction to a different chat or reasoning model depending on the conversational context. OpenAI explains that "sensitive conversations" will now be moved over to one of the company's reasoning models, like GPT‑5-thinking, to "provide more helpful and beneficial responses, regardless of which model a person first selected."
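For illustration only, here is a minimal sketch of what that kind of routing logic could look like. The keyword-based classifier, the function names, and the model identifiers other than GPT‑5-thinking are assumptions for the example, not details from OpenAI's actual implementation.

```python
# Hypothetical sketch of a "real-time router" that escalates sensitive
# conversations to a reasoning model. Names and logic are illustrative only.

SENSITIVE_KEYWORDS = {"self-harm", "suicide", "hurt myself", "overdose"}


def is_sensitive(message: str) -> bool:
    """Naive stand-in for whatever classifier detects acute distress."""
    text = message.lower()
    return any(keyword in text for keyword in SENSITIVE_KEYWORDS)


def route_model(message: str, selected_model: str) -> str:
    """Send sensitive conversations to a reasoning model, regardless of
    which model the user originally selected."""
    if is_sensitive(message):
        return "gpt-5-thinking"  # reasoning model named in the announcement
    return selected_model


# Example: the user picked a lightweight model, but the router overrides it.
print(route_model("I've been thinking about hurting myself", "gpt-4o-mini"))
# -> gpt-5-thinking
```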

Over the last year, AI companies have come under heightened scrutiny for failing to address safety concerns with their chatbots, which are increasingly being used as emotional companions by younger users. Safety guardrails have proven to be easily jailbroken, including limits on how chatbots respond to dangerous or illicit user requests.

Parental controls have become a default first step for tech and social media companies that have been accused of exacerbating the teen mental health crisis, enabling child sex abuse materials, and failing to address predatory actors online. But such features have their limitations, experts say, relying on the proactivity and energy of parents rather than that of companies. Other child safety alternatives, including app marketplace restrictions and online age verification, remain controversial.

As debate and concern flare about their efficacy, AI companies have continued rolling out additional safety guardrails. Anthropic recently announced that its chatbot Claude will now automatically end potentially harmful and abusive interactions, including those involving sexual content about minors; while the current chat is archived, users can still begin another conversation. Facing growing criticism, Meta announced it was limiting its AI avatars for teen users, an interim plan that involves reducing the number of available chatbots and training them not to discuss topics like self-harm, disordered eating, and inappropriate romantic interactions.
