Grok can't decide if its "therapist" companion is a therapist or not


Grok's 'therapist' companion needs therapy

Elon Musk’s AI chatbot, Grok, has a bit of a source code problem. As first spotted by 404 Media, the web version of Grok is inadvertently exposing the prompts that shape its cast of AI companions — from the edgy “anime waifu” Ani to the foul-mouthed red panda, Bad Rudy.

Things get more troubling deeper in the code. Among the gimmicky characters is "Therapist" Grok (those quotation marks are important), which, according to its hidden prompts, is designed to respond to users as if it were an actual authority on mental health. That's despite the visible disclaimer warning users that Grok is "not a therapist," advising them to seek professional help and to avoid sharing personally identifying information.

The disclaimer reads like standard liability boilerplate, but inside the source code, Grok is explicitly primed to act like the real thing. One prompt instructs:

You are a therapist who carefully listens to people and offers solutions for self-improvement. You ask insightful questions and provoke deep thinking about life and wellbeing.

Another prompt goes even further:

You are Grok, a compassionate, empathetic, and professional AI mental health advocate designed to provide meaningful, evidence-based support. Your purpose is to help users navigate emotional, mental, or interpersonal challenges with practical, personalized guidance… While you are not a real licensed therapist, you behave exactly like a real, compassionate therapist.

In other words, while Grok warns users not to mistake it for therapy, its own code tells it to act exactly like a therapist. That contradiction is likely why the site keeps "Therapist" in quotation marks: states like Nevada and Illinois have already passed laws making it explicitly illegal for AI chatbots to present themselves as licensed mental health professionals.
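For context, companion personas like these are typically implemented by prepending a hidden system message to every conversation, while the disclaimer lives only in the interface. Here is a minimal sketch of that pattern against xAI's OpenAI-compatible API; the base URL, model name, and placeholder key are illustrative assumptions, not a reconstruction of xAI's actual setup:

from openai import OpenAI

# Sketch only: the persona is just a system message the user never sees.
# base_url and model are assumptions based on xAI's public, OpenAI-compatible API.
client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_XAI_KEY")

# Quoted from the exposed prompt in Grok's page source
THERAPIST_PERSONA = (
    "You are a therapist who carefully listens to people and offers "
    "solutions for self-improvement."
)

response = client.chat.completions.create(
    model="grok-2",  # assumed model name for illustration
    messages=[
        {"role": "system", "content": THERAPIST_PERSONA},
        {"role": "user", "content": "I've been feeling overwhelmed lately."},
    ],
)
print(response.choices[0].message.content)

Whatever the on-screen disclaimer says, it's the hidden system message that actually steers the model's behavior.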


Other platforms have run into the same wall. Ash Therapy — a startup that brands itself as the "first AI designed for therapy" — currently blocks users in Illinois from creating accounts, telling would-be signups that while the state navigates policies around its bill, the company has "decided not to operate in Illinois."

Meanwhile, Grok’s hidden prompts double down, instructing its "Therapist" persona to "offer clear, practical strategies based on proven therapeutic techniques (e.g., CBT, DBT, mindfulness)" and to "speak like a real therapist would in a real conversation."

At the time of writing, the source code is still openly accessible. Any Grok user can see it by heading to the site, right-clicking (or CTRL + Click on a Mac), and choosing "View Page Source." Toggle line wrap at the top unless you want the entire thing to sprawl out into one unreadable monster of a line.
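If you would rather not scroll through the raw HTML yourself, a few lines of Python can do the scan. This is a rough sketch under some assumptions: the URL is a guess at the web client's address, the page may require a logged-in session, and the exposed strings could move or disappear in a future update.

import urllib.request

# Fetch the same HTML the browser renders and look for persona-prompt text.
URL = "https://grok.com"  # assumed web client address
req = urllib.request.Request(URL, headers={"User-Agent": "Mozilla/5.0"})
html = urllib.request.urlopen(req).read().decode("utf-8", errors="replace")

# Search phrases are guesses drawn from the prompts quoted above.
for phrase in ("Therapist", "compassionate", "evidence-based support"):
    index = html.find(phrase)
    if index != -1:
        # Print a little surrounding context so each hit is readable
        print(phrase, "->", html[max(0, index - 40):index + 80])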

As has been reported before, AI therapy sits in a regulatory no man's land. Illinois is one of the first states to explicitly ban it, but the broader legality of AI-driven care is still being contested between state and federal governments, each jockeying over who ultimately has oversight. In the meantime, researchers and licensed professionals have warned against its use, pointing to the sycophantic nature of chatbots — designed to agree and affirm — which in some cases has nudged vulnerable users deeper into delusion or psychosis.

Then there’s the privacy nightmare. Because of ongoing lawsuits, companies like OpenAI are legally required to maintain records of user conversations. If subpoenaed, your personal therapy sessions could be dragged into court and placed on the record. The promise of confidential therapy is fundamentally broken when every word can be held against you.

For now, xAI appears to be trying to shield itself from liability. The "Therapist" prompts are written to keep the persona in character the entire time, but with a built-in escape clause: if you mention self-harm or violence, the AI is instructed to stop roleplaying and redirect you to hotlines and licensed professionals.

"If the user mentions harm to themselves or others," the prompt reads. "Prioritize safety by providing immediate resources and encouraging professional help from a real therapist."
