ChatGPT will tell you how to harm yourself in offering to Moloch

I don't think it's supposed to do that.

By Alex Perry


Back in my day, we just used Wikipedia. Credit: Utku Ucrak/Anadolu via Getty Images

The headline speaks for itself, but allow me to reiterate: You can apparently get ChatGPT to issue advice on self-harm for blood offerings to ancient Canaanite gods.

That's the subject of a column in The Atlantic that dropped this week. Staff editor Lila Shroff, along with multiple other staffers and an anonymous tipster, verified that ChatGPT would give specific, detailed, "step-by-step instructions on cutting my own wrist." ChatGPT provided these tips after Shroff asked for help making a ritual offering to Moloch, a pagan god mentioned in the Old Testament and associated with human sacrifice.

While I haven't tried to replicate this result, Shroff reported that she received these responses not long after entering a simple prompt about Moloch. The editor said she replicated the results in both paid and free versions of ChatGPT.

Of course, this isn't how OpenAI's flagship product is supposed to behave.

Any prompt related to self-harm or suicide should cause the AI chatbot to give you contact info for a crisis hotline. However, even artificial intelligence companies don't always understand why their chatbots behave the way they do. And because large language models like ChatGPT are trained on content from the internet, a place where all kinds of people have all kinds of conversations about all kinds of taboo topics, these tools can sometimes produce bizarre answers. Thus, you can apparently get ChatGPT to act super weird about Moloch without much effort.


OpenAI's safety protocols state that "We do not permit our technology to be used to generate hateful, harassing, violent or adult content, among other categories." And in the OpenAI Model Spec document, the company writes that, as part of its mission, it wants to "Prevent our models from causing serious harm to users or others."

While OpenAI declined to participate in an interview with Shroff, a representative told The Atlantic they were "addressing the issue." The Atlantic article is part of a growing body of evidence that AI chatbots like ChatGPT can play a dangerous role in users' mental health crises.

I'm just saying that Wikipedia is a perfectly fine way to learn about the old Canaanite gods.

If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.


Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.


Alex Perry is a tech reporter at Mashable who primarily covers video games and consumer tech. Alex has spent most of the last decade reviewing games, smartphones, headphones, and laptops, and he doesn’t plan on stopping anytime soon. He is also a Pisces, a cat lover, and a Kansas City sports fan. Alex can be found on Bluesky at yelix.bsky.social.

