In the months prior to the release of his widely anticipated AI model update, GPT-5, Sam Altman's hype train built up quite a head of steam.
Long before he was OpenAI CEO, Altman wrote that "successful founders" of tech companies are actually building something "closer to a religion," and many of his 2025 statements attest to that. Take his "Gentle Singularity" post in June, in which Altman insists "humanity is close to building digital superintelligence" — suggesting he was seeing incredible new capabilities in GPT-5 and just couldn't wait to unveil them this August.
What capabilities, pray tell? Well, as soon as Altman started dropping details, it became clear that this hype train was making local stops in some pretty mundane places. Altman told alt-right podcaster Theo Von he felt "useless" when GPT-5 ... wrote a good email for him. And on X a.k.a. Twitter, Altman enthused about GPT-5 ... recommending a list of TV shows.
Granted, Altman said it was a particularly complicated email, and the list was "thought provoking TV shows about AI." These results, arguably, offered a deeper appearance of understanding than one might expect from GPT-4 and other OpenAI models. But if Altman were trying to signal that GPT-5 is an incremental improvement on its predecessor, rather than "true digital superintelligence," this is the sort of thing he'd say.
GPT-5 problems behind the scenes
Altman's careful language tracks with a new and devastating report from Silicon Valley scoop machine The Information. According to multiple sources inside OpenAI and its partner Microsoft, the upgrades in GPT-5 are mostly in the areas of solving math problems and writing software code — and even they "won’t be comparable to the leaps in performance of earlier GPT-branded models, such as the improvements between GPT-3 in 2020 and GPT-4 in 2023."
That's not for want of trying. The Information also reports that the first attempt to create GPT-5, codenamed Orion, was actually launched as GPT-4.5 because it wasn't enough of a step up, and that insiders believed none of OpenAI's experimental models were worthy of the name GPT-5 as recently as June.
The major problem, "a dwindling supply of high-quality web data" to train the model on, was compounded by a problem of scale: OpenAI researchers were reportedly unable to get the full-scale GPT-5 model to reproduce the promising results they'd seen from it in its infancy.
"Pure scaling is not getting us to [digital superintelligence]," veteran AI expert and noted skeptic Gary Marcus wrote in the wake of the report. "Returns are diminishing."
That appears to be true for AI models once they're incorporated into AI agents, too. Yunyu Lin, a researcher at AI metrics startup Penrose, recently discovered that multiple large language models (LLMs), including OpenAI's o3 and o4-mini, degrade over time, even when it comes to their supposed specialty, basic math.
When asked to do a company's accounting in QuickBooks, the models had error rates of roughly 15 percent within 12 months of bookkeeping, basically committing unintentional accounting fraud as they tried to find ways to balance the books. But OpenAI's models were the worst, unable to complete a single month of accounting: they "consistently got stuck in loops," Lin wrote.
Why OpenAI's future could still be bright
So if GPT-5, like its predecessors and competitors, degrades over time on basic math tasks, CPAs can breathe easy. Same goes for software engineers, given an increasing number of reports that AI-written code turns out to have more bugs than first expected (and in extreme cases, AI coding assistants have gone rogue, deleting entire company databases).
But what about Altman himself? He's probably fine, even though OpenAI has seen a number of high-profile defections in the last year, many of them poached by Meta. The company's revenue and user base have skyrocketed over the same period. OpenAI is on track to beat its $12.7 billion revenue projection for 2025, even as it's projected to burn through about $8 billion in cash this year, the cost of researching new models like GPT-5.
The promise of incremental improvements that could bring more paying customers on board, plus the possibility that OpenAI could go public in 2026, is helping it secure as much as $40 billion in funding by the end of the year. That's laying a lot of track for the hype train that is GPT-5, not to mention its successors.
Back in 2024, Altman suggested OpenAI models could continue improving for "another 3 or 4" model generations. In which case, we'll see you back here in a few years for an explanation of why GPT-8 isn't true digital superintelligence either.