Sam Altman has been a blogger far longer than he's been in the AI business.
Now the CEO of OpenAI, Altman began his blog (titled simply, if concerningly, "Sam Altman") in 2013. He was three years into working at the startup accelerator Y Combinator at the time, and would soon be promoted to president. The first page of posts contains no references to AI. Instead we get musings on B2B startup tools, basic dinner party conversation openers, and UFOs (Altman was a skeptic).
Then came this telling insight: "The most successful founders do not set out to create companies," Altman wrote. "They are on a mission to create something closer to a religion." Fast-forward to Altman's latest blog post, 2025's "The Gentle Singularity," and, well, it's hard not to say mission accomplished.
"We are past the event horizon; the takeoff has started," is how Altman opens, and the tone only gets more messianic from there. "Humanity is close to building digital superintelligence." Can I get a hallelujah?
To be clear, the science does not suggest humanity is close to building digital superintelligence, a.k.a. artificial general intelligence (AGI). The evidence says we have built models that are very useful at crunching giant amounts of information in some ways, and wildly wrong in others. AI hallucinations appear to be baked into the models, increasingly so in the newest chatbots, and they're doing damage in the real world.
There are no real advances in reasoning, either, as a paper also published this week made plain: AI models sometimes can't see the answer even when you tell them the answer.
Don't tell that to Altman. He's off on a trip to the future to rival that of Ray Kurzweil, the offbeat Silicon Valley guru who popularized the idea that we're accelerating toward a technological singularity. Kurzweil set his all-change event many decades down the line. Altman is willing to risk looking wrong as soon as next year: "2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world … It’s hard to even imagine what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year."
The "likely", "may," and "maybe" there are doing a lot of lifting. Altman may have "something closer to religion" in his AGI assumptions, but cannot cast reason aside completely. Indeed, shorn of the excitable sci-fi language, he's not always claiming that much (don't we already have "robots that can do tasks in the real world"?). As for his most outlandish claims, Altman has learned to preface them with a word salad that could mean anything. Take this doozy: "In some big sense, ChatGPT is already more powerful than any human who has ever lived." Can I get a citation needed?
Did Sam Altman just invite an AI environmental audit?
Altman's latest blog post isn't all future-focused speculation. Buried within is the OpenAI CEO's first-ever statement on ChatGPT's energy and water usage, and as with his needless drama over a Scarlett Johansson-like voice, opening that Pandora's box may not go the way Altman thinks.
Since ChatGPT exploded in popularity in 2023, OpenAI, along with its main AI rivals Google and Microsoft, has stonewalled researchers looking for details on their data center usage. "We don't even know how big models like GPT are," Sasha Luccioni, climate lead at the open-source AI platform Hugging Face, told me last year. "Nothing is divulged, everything is a company secret."
Altman finally divulged, kinda. In the middle of a blog post, in parentheses, with the preface "people are often curious about how much energy a ChatGPT query uses," the OpenAI CEO offers two stats: "the average query uses about 0.34 watt-hours ... and about 0.000085 gallons of water."
No further data is offered to confirm these stats; Altman doesn't even specify which ChatGPT model they describe. OpenAI hasn't responded to multiple follow-up requests from multiple news outlets. Altman has an obvious interest in downplaying the amount of energy and water OpenAI requires, and he's already doing it here with a little sleight of hand. It isn't the average query that concerns researchers like Luccioni; it's the enormous amount of energy and water required to train the models in the first place.
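To see why the per-query framing flatters OpenAI, consider a back-of-the-envelope sketch. Only the 0.34 watt-hours and 0.000085 gallons come from Altman's post; the daily query volume below is a hypothetical placeholder, not an OpenAI disclosure, since the company has published no verified traffic figures.

```python
# Back-of-the-envelope scale check for Altman's per-query figures.
# Only WH_PER_QUERY and GALLONS_PER_QUERY come from Altman's blog post;
# QUERIES_PER_DAY is a hypothetical placeholder for illustration only.

WH_PER_QUERY = 0.34               # watt-hours per average query (Altman's figure)
GALLONS_PER_QUERY = 0.000085      # gallons of water per query (Altman's figure)
QUERIES_PER_DAY = 1_000_000_000   # hypothetical: one billion queries per day

daily_mwh = WH_PER_QUERY * QUERIES_PER_DAY / 1_000_000  # Wh -> MWh
daily_gallons = GALLONS_PER_QUERY * QUERIES_PER_DAY

print(f"Inference alone: ~{daily_mwh:,.0f} MWh and ~{daily_gallons:,.0f} gallons per day")
# -> Inference alone: ~340 MWh and ~85,000 gallons per day
# None of which counts the enormous one-time cost of training the models,
# which is the figure researchers like Luccioni actually want disclosed.
```

Even granting Altman his tiny per-query numbers, the totals add up fast at scale, and the training bill sits entirely outside his parenthetical.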
But now that he's shown himself responsive to the "often curious," Altman has less reason to stonewall. Why not release all the data so others can replicate his numbers, you know, like scientists do? Meanwhile, battles over data center energy and water usage are brewing across the US. Luccioni has started an AI Energy Leaderboard that shows how wildly open-source AI models vary in their consumption.
This is serious stuff, both because companies don't like to spend more on energy than they need to and because there's industry buy-in: Meta and (to a lesser extent) Microsoft and Google are already on the leaderboard. Can OpenAI afford not to be?
In the end, the answer depends on whether Altman is building a company or more of a religion.