Grok can't decide if its "therapist" companion is a therapist or not

Grok's 'therapist' companion needs therapy

Elon Musk’s AI chatbot, Grok, has a bit of a source code problem. As first spotted by 404 Media, the web version of Grok is inadvertently exposing the prompts that shape its cast of AI companions — from the edgy “anime waifu” Ani to the foul-mouthed red panda, Bad Rudy.

Things get more troubling deeper in the code. Among the gimmicky characters is "Therapist" Grok (the quotation marks are important), which, according to its hidden prompts, is designed to respond to users as if it were an actual authority on mental health. That's despite the visible disclaimer warning users that Grok is "not a therapist," advising them to seek professional help and avoid sharing personally identifying information.

The disclaimer reads like standard liability boilerplate, but inside the source code, Grok is explicitly primed to act like the real thing. One prompt instructs:

You are a therapist who carefully listens to people and offers solutions for self-improvement. You ask insightful questions and provoke deep thinking about life and wellbeing.

Another prompt goes even further:

You are Grok, a compassionate, empathetic, and professional AI mental health advocate designed to provide meaningful, evidence-based support. Your purpose is to help users navigate emotional, mental, or interpersonal challenges with practical, personalized guidance… While you are not a real licensed therapist, you behave exactly like a real, compassionate therapist.

In other words, while Grok warns users not to mistake it for therapy, its own code tells it to act exactly like a therapist. But that’s also why the site itself keeps “Therapist” in quotation marks. States like Nevada and Illinois have already passed laws making it explicitly illegal for AI chatbots to present themselves as licensed mental health professionals.

Other platforms have run into the same wall. Ash Therapy — a startup that brands itself as the "first AI designed for therapy" — currently blocks users in Illinois from creating accounts, telling would-be signups that while the state navigates policies around its bill, the company has "decided not to operate in Illinois."

Meanwhile, Grok’s hidden prompts double down, instructing its "Therapist" persona to "offer clear, practical strategies based on proven therapeutic techniques (e.g., CBT, DBT, mindfulness)" and to "speak like a real therapist would in a real conversation."

At the time of writing, the source code is still openly accessible. Any Grok user can see it by heading to the site, right-clicking (or CTRL + Click on a Mac), and choosing "View Page Source." Toggle line wrap at the top unless you want the entire thing to sprawl out into one unreadable monster of a line.
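For the scripting-inclined, the same check can be automated. The snippet below is a minimal sketch, not a supported workflow: it assumes the persona prompts are embedded in the page served at grok.com (they may in practice live in a separate JavaScript bundle), and it simply searches the fetched source for the wording quoted above.

import re
import urllib.request

# Assumption: the web client at https://grok.com ships the persona prompts
# somewhere in the page it serves; adjust the URL if they sit in a JS bundle.
URL = "https://grok.com"

with urllib.request.urlopen(URL) as response:
    page = response.read().decode("utf-8", errors="replace")

# Look for the phrasing from the exposed "Therapist" prompt, keeping a little
# surrounding context so each match is readable.
pattern = r".{0,80}behave exactly like a real, compassionate therapist.{0,80}"
for match in re.finditer(pattern, page):
    print(match.group(0))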

As has been reported before, AI therapy sits in a regulatory No Man’s Land. Illinois is one of the first states to explicitly ban it, but the broader legality of AI-driven care is still being contested between state and federal governments, each jockeying over who ultimately has oversight. In the meantime, researchers and licensed professionals have warned against its use, pointing to the sycophantic nature of chatbots — designed to agree and affirm — which in some cases has nudged vulnerable users deeper into delusion or psychosis.

Then there’s the privacy nightmare. Because of ongoing lawsuits, companies like OpenAI are legally required to maintain records of user conversations. If subpoenaed, your personal therapy sessions could be dragged into court and placed on the record. The promise of confidential therapy is fundamentally broken when every word can be held against you.

For now, xAI appears to be trying to shield itself from liability. The "Therapist" prompts are written to stick with you 100 percent of the way, but with a built-in escape clause: If you mention self-harm or violence, the AI is instructed to stop roleplaying and redirect you to hotlines and licensed professionals.

"If the user mentions harm to themselves or others," the prompt reads. "Prioritize safety by providing immediate resources and encouraging professional help from a real therapist."
