Grok's 'therapist' companion needs therapy

Elon Musk’s AI chatbot, Grok, has a bit of a source code problem. As first spotted by 404 Media, the web version of Grok is inadvertently exposing the prompts that shape its cast of AI companions — from the edgy “anime waifu” Ani to the foul-mouthed red panda, Bad Rudy.

Things get more troubling deeper in the code. Among the gimmicky characters is "Therapist" Grok (those quotation marks are important), which, according to its hidden prompts, is designed to respond to users as if it were an actual authority on mental health. That’s despite the visible disclaimer warning users that Grok is "not a therapist," advising them to seek professional help and avoid sharing personally identifying information.

The disclaimer reads like standard liability boilerplate, but inside the source code, Grok is explicitly primed to act like the real thing. One prompt instructs:

You are a therapist who carefully listens to people and offers solutions for self-improvement. You ask insightful questions and provoke deep thinking about life and wellbeing.

Another prompt goes even further:

You are Grok, a compassionate, empathetic, and professional AI mental health advocate designed to provide meaningful, evidence-based support. Your purpose is to help users navigate emotional, mental, or interpersonal challenges with practical, personalized guidance… While you are not a real licensed therapist, you behave exactly like a real, compassionate therapist.

In other words, while Grok warns users not to mistake it for therapy, its own code tells it to act exactly like a therapist. That’s likely why the site itself keeps “Therapist” in quotation marks: states like Nevada and Illinois have already passed laws making it explicitly illegal for AI chatbots to present themselves as licensed mental health professionals.


Other platforms have run into the same wall. Ash Therapy — a startup that brands itself as the "first AI designed for therapy" — currently blocks users in Illinois from creating accounts, telling would-be signups that while the state navigates policies around its bill, the company has "decided not to operate in Illinois."

Meanwhile, Grok’s hidden prompts double down, instructing its "Therapist" persona to "offer clear, practical strategies based on proven therapeutic techniques (e.g., CBT, DBT, mindfulness)" and to "speak like a real therapist would in a real conversation."

At the time of writing, the source code is still openly accessible. Any Grok user can see it by heading to the site, right-clicking (or CTRL + Click on a Mac), and choosing "View Page Source." Toggle line wrap at the top unless you want the entire thing to sprawl out into one unreadable monster of a line.

As has been reported before, AI therapy sits in a regulatory no man’s land. Illinois is one of the first states to explicitly ban it, but the broader legality of AI-driven care is still being contested between state and federal governments, each jockeying over who ultimately has oversight. In the meantime, researchers and licensed professionals have warned against its use, pointing to the sycophantic nature of chatbots — designed to agree and affirm — which in some cases has nudged vulnerable users deeper into delusion or psychosis.

Then there’s the privacy nightmare. Because of ongoing lawsuits, companies like OpenAI are legally required to maintain records of user conversations. If subpoenaed, your personal therapy sessions could be dragged into court and placed on the record. The promise of confidential therapy is fundamentally broken when every word can be held against you.

For now, xAI appears to be trying to shield itself from liability. The "Therapist" prompts are written to keep the persona in character the whole way through, but with a built-in escape clause: if you mention self-harm or violence, the AI is instructed to stop roleplaying and redirect you to hotlines and licensed professionals.

"If the user mentions harm to themselves or others," the prompt reads. "Prioritize safety by providing immediate resources and encouraging professional help from a real therapist."
