12 predictions for AI in 2026

A year of building an AI business has given Lewis Liu a lot to think about. He gives us his 12 AI assumptions to live by in 2026

‘Tis the season for 2026 predictions, a ritual I usually hate. Every December, pundits confidently forecast the future, only to forget their words 12 months later. I’m hardly innocent: I’ve made my share of half-formed predictions across podcasts and blog posts, promptly buried by the next news cycle.

This year feels different. Building a new company has shifted my thinking from forecasts to assumptions: the kind I'm organising my company and my bets around. So here's my version of the 12 Days of Christmas: 12 AI assumptions for 2026 that I'm actively betting on.

These aren’t abstract thought experiments. They’re shaped by a year of building a venture from scratch, investing in early-stage companies, talking to 300+ executives across a variety of industries, advising policymakers and spending time with people who are both deeply technical and unusually plugged into how AI is actually being deployed.

Context and privacy will merge into a single hot topic in AI. Privacy and context are two sides of the same coin: you can't have truly effective AI without deep context, and you can't share deep context without robust, granular privacy controls woven in. This remains under-developed as the industry focuses on foundational LLMs and basic GPT-wrapper applications, but if you want "live AI" in your workflow, context and privacy will need to be built together as twin concepts.

Vibe-coding will enter a trough of disillusionment, then mature. We've seen precipitous drops in the usage of AI coding agents such as Lovable and Cursor; AI coding platforms saw a 76 per cent fall in activity last quarter. As one meme puts it: "Vibe coding allows two engineers to develop the technical debt of 50 engineers." These AI agents generate so much code that human engineers can't debug or iterate on it. Yet these tools are powerful (we use them daily), and a more measured approach will emerge with the next generation, alongside improved guardrails.

Human originality will be rewarded as a foil to AI slop. Let's face it: we're all sick of GPT-laden lazy AI slop. We can now recognise it a mile away, and genuinely human-originated content will stand out in a sea of beige, thoughtless AI-generated text. Personalised AI that amplifies unique human voices offers a potential compromise.

AI will finally make organisational proprietary data useful. If we can solve privacy and context, all the unstructured and siloed data in an organisation can be leveraged for greater insights and genuine automation. Data – customer data, project precedent, documented employee experience – will become as much a differentiating factor for firms as humans and culture if paired with the right controls and governance.

Boring, traditional industries (legal, finance, real estate, energy) with thoughtful AI adoption will start outcompeting their peers. Believe it or not, law firms are now among the most avid adopters of AI in knowledge management and document review, while energy development companies are actively exploring how AI can improve site construction. The gains for traditional companies are much greater than for already tech-forward ones, allowing them to leapfrog competitors.

Real recurring revenue with genuine unit economics will start to overshadow "vibe revenue". 2023-2025 saw unprecedented growth in start-up "revenue". If you peel back the onion, much of this revenue isn't truly recurring: it is repackaged services carrying extremely low or negative margins. 2026 will bring more mature AI business practices – and yes, gross margins and real revenue will matter again.

Populist anti-elite scepticism towards Silicon Valley will rise significantly. Sam Altman's rhetoric about "replacing humans" isn't landing well with white-collar workers. Combined with increasing wealth inequality, this will fuel resentment not just among blue-collar workers, but among relatively affluent professionals too. As AI builders, we must build AI that works for humans, not just the oligarchic few.

Investment in physical AI in the West will dramatically increase. As part of Trump’s “American dynamism” push, investment will surge in bringing AI to the physical world the way China already has with its automated dark factories and advanced robotics. We already see Bezos launching Build AI with $6.2bn to bring AI to manufacturing, an attempt to close the gap with China.

Chinese models will continue gaining momentum and acceptance among AI builders in the West. As I discussed in my last column, venture capital firm Andreessen Horowitz predicts that 80 per cent of Silicon Valley start-ups are already using Chinese models. While my new venture still uses Google Gemini and Anthropic given our customer base, we’re watching this sentiment shift closely.

The "chip wars" will matter less as better, cheaper open-source models flood the market, reducing the need for best-in-class GPUs. With DeepSeek 3.2 benchmarked at 96 per cent cheaper than OpenAI or Google Gemini, the question is whether best-in-class chips are necessary to build most AI applications. They may still be needed for training AI models, but running models for application building and production represents the vast majority of GPU use cases.

A market correction will be driven by Chinese models and data centre overbuild. In my last column, I questioned the necessity of trillion-dollar data centre build-outs given orders-of-magnitude cheaper models. What does this mean when, according to many economists, US GDP growth is 100 per cent fuelled by data centre investments? I'm not alone: the IBM CEO questioned this last week too.

Europe will emerge as a third pole in AI, not in "regulation" or "AI acceleration", but in old-fashioned trust and a perspective about humanity. Europe's approach to AI, often caricatured as solely focused on regulation, is rooted in deeper values: privacy, human-centricity and a cautious, thoughtful perspective about AI's role in society. This isn't just about slowing down (though the regulation is tedious); it's about building trust and ensuring AI serves humanity, not the other way around. As the global AI race heats up, this old-fashioned trust and distinct perspective on humanity will position Europe as a critical third pole, offering a necessary counterbalance to US accelerationism and China's application-driven pragmatism.

AI will continue to develop at breakneck speed in 2026, but the texture of that development will be different. It has to be. AI should not be “done to humans”; it must be developed for humans, amplifying rather than replacing us. As I build my company, invest in others, and watch this space evolve, I’m organising my work around a simple conviction: the most valuable AI applications will be the ones that make us more human, not less. These 12 assumptions reflect that bet. The alternative, AI that serves only the oligarchic few, isn’t just bad business. It’s inhuman.

Dr Lewis Z. Liu is co-founder and CEO of Eigen Technologies
