GPT-5 failed to wow users, but its less flashy delivery signals a positive shift for AI from at-all-costs advancement to actual use cases, writes Lewis Liu
I was carrying boxes of pizza back to a park in North London last Sunday, chatting with a fellow AI founder at our kids’ playdate, when we both stopped mid-stride. We’d stumbled onto a counterintuitive truth: the fact that GPT-5 and other foundation models seem to have plateaued could be one of the best things to happen to the AI industry.
The specific topic we were discussing was how GPT-5 feels more like technical optimisation than paradigm-shifting capability. When it launched, Sam Altman did his usual hype-building, promising GPT-5 would run entire nations. It delivered some vaguely impressive features: an updated “agentic operator”, consolidated reasoning and language models, and so forth. But after all the fanfare, much of the press and most users came away disappointed. Many even complained it wasn’t as good as earlier versions or competitors.
Even as I type this, my wife is complaining from across our dining room table that GPT-5 seems to retain far less conversational context than it used to, calling it “useless”. Interestingly, she isn’t imagining the downgrade: OpenAI is deliberately reducing the tokens it processes to cut compute costs. Most experts now read GPT-5 less as a “raw capability” play and more as an “optimisation” play, with the company trying to reverse the deeply negative gross margins on each LLM call while stabilising outputs so enterprises can better trust their systems.
To me, this signals we’re entering an era of product and use-case innovation rather than pure technological advancement. And this is a good thing.
AI slowdown is welcome
The vast majority of founders I’ve spoken with, whether I’m wearing my investor hat or my fellow-founder hat, are simply fatigued by the foundation model hamster wheel. Will OpenAI completely disrupt my business? Are the APIs stable enough to build on, or will Anthropic change its rate limits again? Will DeepSeek launch another reasoning model that changes everything?
On top of this, the AI “vibe coding” tools that bleeding-edge founders rely on, like Cursor or Lovable, have changed their pricing and product policies so dramatically that engineers don’t feel safe committing to any one of them.
Even with our current LLM technology, we can build 10+ years’ worth of new products and applications. LLMs are genuinely revolutionary in how we interact with machines. With clear evidence that “raw capability” is plateauing, the founders I’ve talked to finally feel they can focus on solving problems that were previously unsolvable without LLMs.
The fear has always been committing to product features or moats that rely on either the eventual progress of LLM capability or the continued cost optimisation of the APIs themselves. With the space moving so quickly, most VCs I’ve talked to bemoan that the only moat is “speed”. But speed doesn’t necessarily build the most thoughtful or complete solutions.
The math doesn’t math
The second major shift is a reckoning that the “math doesn’t math” when it comes to AI unit economics. OpenAI’s monthly active users declined for the first time this June, and its losses are projected to triple to $14bn in 2026, according to The Information.
Similarly, on the application layer, Perplexity, a well-known AI search tool attempting to compete with Google, runs deep into negative-margin territory, yet recently launched a $34.5bn all-cash bid for Chrome, nearly twice its own valuation, while generating only an estimated $80m in annualised run-rate revenue. As a funny side note, when I had Claude copyedit this column, it even doubted the news in its comments: “Major factual concern: The Perplexity bidding $34.5bn for Chrome claim needs immediate verification. This sounds extremely implausible given Perplexity’s size and the fact that Chrome isn’t for sale. If this is incorrect, it undermines your credibility.”
That said, some companies are growing faster than ever. Lovable, the vibe coding tool, reached $100m in recurring revenue just eight months after passing $1m. Several companies are on similar trajectories, though none at this record speed. There are genuine use cases warranting explosive growth, and legitimate grounds for optimism.
Most companies experiencing explosive growth, however, struggle to achieve even positive gross margins, given the constant need to chase new features that require upgrades to ever more powerful underlying models. With GPT-5 signalling a slowdown in raw capability development in favour of margin and stability optimisation, other AI companies may finally get the chance to optimise their own unit economics.
The takeaway: GPT-5 signals a paradigm shift
So what should we make of people like my wife getting annoyed with GPT-5? This is a natural part of the adjustment and optimisation of such explosively powerful technology. I’m optimistic that time invested in improving AI use cases, products and economics will let AI become truly embedded in how we work and live.
As I’ve written previously, today’s AI tools lack the personal context and situational awareness needed to be truly transformative for workers and consumers. Stabilising LLMs will let founders build revolutionary products that businesses and consumers actually trust. AI is too powerful to remain a passing fad, but it needs financially sustainable technology built for the right use cases to genuinely change the world.
This paradigm shift might just make that possible.
Dr Lewis Z Liu is chief AI officer at Sirion