Question the hype: AI still faces huge barriers

AI models have developed at a pace that was unimaginable just two years ago, but there are three big barriers to further progress, says Lewis Z Liu

As the current revolution in artificial intelligence accelerates, it is more important than ever for society to engage in a well-informed debate about this technology. Through this column, I aim to provide a balanced perspective on the state of AI, as I believe that an informed audience – one that understands the real issues – will be best positioned both to benefit from the AI revolution and to mitigate its risks. Misconceptions about AI abound, fuelled by a range of actors, from self-serving incumbents like Sam Altman’s OpenAI and Google to political operators like Musk, Trump and China’s Xi. In a landscape where power is at stake, it is only natural for influential figures to shape the narrative of such a transformative technology to serve their own interests.

First, I want to outline what is now possible that was unimaginable just two years ago. AI models can now compress the entire internet with remarkable fidelity and “converse” with users or process text inputs with near-human fluency. This breakthrough is fundamentally reshaping our understanding of what a “computer” is and how we interact with technology.

The most tangible change is in user experience (UX, in software industry terms). Those old enough to remember the command prompt – where users typed lines of text in a rigid syntax into Microsoft’s MS-DOS, the precursor to Windows – will recall how, in the 1980s, it succeeded punch cards. The command-line interface was later replaced by the graphical user interface (GUI), which began with the advent of the mouse and evolved into the touchscreen interfaces we now use on mobile devices. Yet even today’s most advanced smartphones still rely primarily on graphical or visual interaction.

LLMs are changing this entirely. Thanks to their natural language capabilities, we can now interact with AI much like we would with another human – through chat or voice commands. My two sons, aged five and seven, for example, can’t imagine a world where you can’t simply talk to your car to ask for directions or change the music.

AI agents

This brings us to the concept of AI agents, currently the hottest trend in the Bay Area. During a recent visit, I couldn’t attend a dinner party or step into a bar without hearing discussions about them. For those unfamiliar with the term, an AI agent is a software system powered by multiple LLMs working together to function as a “specialist employee” in the digital space. Microsoft Teams, for example, recently introduced a feature allowing users to add AI agents to chatrooms just as they would a human colleague. AI agents are already performing high-value tasks. For instance, some can now call healthcare insurers – engaging in real conversations – to process claims on behalf of medical clinics. Others function as junior lawyers, redlining contracts during negotiations and presenting revisions to senior attorneys.
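Stripped of the hype, the loop at the heart of such an agent is simple: ask a model what to do next, run the chosen tool, feed the result back, and repeat until the model says the task is done. The sketch below illustrates that loop only – the model call is a hard-coded stand-in, and the tool names and claim-processing scenario are hypothetical, not any vendor’s actual API.

```python
# Minimal sketch of an agent's plan-act loop. "stub_model" stands in for
# a real LLM call; the tools and the claims scenario are hypothetical.

def lookup_policy(claim_id):
    """Hypothetical tool: fetch coverage details for a claim."""
    return {"claim_id": claim_id, "covered": True}

def draft_email(recipient, body):
    """Hypothetical tool: draft an outbound message."""
    return f"To: {recipient}\n{body}"

TOOLS = {"lookup_policy": lookup_policy, "draft_email": draft_email}

def stub_model(task, history):
    """Stand-in for an LLM: returns the next action as (tool, args), or None when done."""
    if not history:
        return ("lookup_policy", {"claim_id": task["claim_id"]})
    if history[-1][0] == "lookup_policy":
        return ("draft_email", {"recipient": task["insurer"],
                                "body": f"Please process claim {task['claim_id']}."})
    return None  # no further action: task complete

def run_agent(task):
    history = []
    # Plan-act loop: decide, act, record the result, decide again
    while (action := stub_model(task, history)) is not None:
        tool, args = action
        result = TOOLS[tool](**args)
        history.append((tool, result))
    return history

steps = run_agent({"claim_id": "C-42", "insurer": "claims@example.com"})
```

In a production agent, the interesting engineering is everything the stub hides: prompting the model to emit valid tool calls, handling failures, and deciding when to hand off to a human.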

Even if AI technology were to stagnate (which it won’t), there is still at least a decade’s worth of innovation to be realised. From OpenAI’s original GPT-3.5-powered ChatGPT to DeepSeek’s latest advances in making LLMs economically accessible, the way we interact with computers is undergoing a permanent transformation.

We’re nowhere near AGI

This is undeniably exciting, but claims that we are “close to artificial general intelligence (AGI)” amount to little more than self-serving hype. In my view, three fundamental unsolved problems must be addressed before the AI industry can “level up” to the next phase of intelligence. These are deep, complex challenges – some of which could be solved within the next year, while others may take decades.

The first challenge is memory – the fact that LLMs cannot truly “remember” large amounts of new data, past conversations, or instructions. If you’ve ever had a long conversation with ChatGPT, you may have noticed that it sometimes forgets key details or even fabricates information: once an exchange outgrows the model’s fixed context window, earlier details simply fall away. In reality, hallucinations and poor memory are two sides of the same coin; LLMs are inherently static. Once a model is trained – meaning it has compressed and “memorised” its training data – updating or retraining it with new information is extremely costly. This is a major advantage we have as biological beings: our brains continuously learn and adapt, whereas LLMs remain largely fixed after training.

The second major challenge is machine reasoning. For example, when you input “1 + 1 = ?” into ChatGPT, it responds in one of two ways: either by recalling countless instances of “1 + 1 = 2” from its training data or by generating a Python script to compute the answer. However, at no point does the AI possess an intrinsic understanding of the concepts of “1”, “2”, “addition” or “equals”. While the current state of the art, OpenAI’s o3, is impressive, it remains mired in controversy over its training and test data sets and is reported to struggle to multiply numbers much beyond 13 × 13. Achieving “true reasoning” will require a fundamental shift in AI architecture.
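The “generate a script to compute the answer” route can be sketched in a few lines. In a real system the LLM would emit the snippet and a host program would run it in a proper sandbox; here the snippet is hard-coded, and the bare-bones `exec` call is only an illustration of the pattern, not a safe sandbox.

```python
# Sketch of the tool-use pattern: the model writes code, the host runs it.
generated_code = "result = 1 + 1"  # stands in for model-generated Python
scratch = {}
# Execute with no builtins available - a crude stand-in for a real sandbox
exec(generated_code, {"__builtins__": {}}, scratch)
answer = scratch["result"]  # the host reads the computed value back out
```

The point of the example is the division of labour: the arithmetic is done by the Python interpreter, not by the model, which is exactly why the model can get the answer right without understanding it.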

The third challenge is physical context and manipulation. Meta’s Chief AI Scientist, Yann LeCun, has stated that contextual understanding is essential for advancing AI to the next stage. Similarly, Nvidia’s Jensen Huang has identified AI’s integration with robotics as the next frontier. AI’s ability to interact with and learn from its environment – whether physical or digital – is a crucial aspect of human intelligence. A child doesn’t learn solely by reading Wikipedia; early development is shaped far more by interacting with and understanding their surroundings. We are already seeing early investments in this area across both the US and China. Every year, Beijing’s televised Chinese New Year Gala – watched by much of the nation – features traditional performances. This year was different: one of the traditional dances was performed in perfect sync by robots alongside human dancers. This is a glimpse into the future.

AI is already transforming the world, but it’s crucial to be specific about how it is evolving rather than getting caught up in marketing hype or extreme scepticism. There are still deep, complex problems that must be solved to push the frontier forward. With that in mind, I leave you with a provocative question posed to me by a prominent AI investor two weeks ago: “If AI can perform math and reasoning 99 per cent better than most humans, does it really matter philosophically that it doesn’t truly understand the concepts behind it?”

Dr Lewis Z Liu is a founder, investor, and AI scientist. As Founder & CEO of Eigen Technologies, he pioneered small language models and AI enterprise adoption, drawing on his Oxford Physics PhD. He sold Eigen in 2024 and now splits his time between New York, London, and San Francisco.
