Is your tech strategy good, or just AI theatre?

Artificial intelligence has become the business world’s favourite performance. FTSE 100 earnings calls now mention AI more than inflation, and investor briefings increasingly resemble tech expos. Analysts report that AI references in UK company filings have tripled in the past year, yet the results aren’t matching the rhetoric. Markets are beginning to glance sideways. For a lot of companies, AI is buying time for results that may never arrive. How do you know if your AI strategy is theatre before it’s too late?

New data from the Office for National Statistics highlights how shallow much of the progress really is. Just 23 per cent of UK businesses say they are using some form of AI technology, and only four per cent of those report a reduction in headcount as a result. The figures sound benign, but they reveal an awkward truth: AI adoption is widespread enough to be fashionable but not deep enough to be transformative. The corporate world is talking about revolution while quietly maintaining business as usual. Many executives assume that adopting AI means installing tools rather than redesigning work, and there’s more than a little whiff of waiting for someone else to tell them what to do. That is probably why the big tech firms are having such an easy time right now. The question is no longer who has AI, but who is using it well.

The myth of the productivity miracle

The much-maligned and misunderstood MIT study published earlier this year illustrates how easily AI data gets distorted. The research found that generative AI could boost worker productivity by up to 59 per cent in specific tasks such as customer support. The finding spread across headlines and investor presentations as proof that AI was an economic saviour, but the nuance was lost. The productivity gains applied to narrow, text-based tasks performed by trained professionals in a controlled environment. For more complex or ambiguous work, the gains were negligible or even negative. Details like that matter.

Productivity increases when AI complements expertise, not when it replaces it. In many workplaces, the opposite is happening. Employees are handed generic AI tools and told to “experiment”. Knowledge flows become fragmented, oversight weakens and the promised efficiency gains evaporate into untraceable clicks. Without clear frameworks for use, AI can just as easily create noise and duplication as it can reduce workload.

The progress illusion

AI strategies fail because most businesses confuse adoption with transformation. Installing a chatbot or automating reporting does not make an organisation AI-driven. A genuine strategy requires integration, not just experimentation: reimagining workflows, retraining teams and reshaping governance around continuous learning. Put another way, it takes a ton of work, and most firms are just getting approval for ChatGPT Enterprise. The companies seeing the biggest returns are using AI to strengthen decision-making, forecast risk and personalise service, not simply to fire fragile meatsacks.

Businesses that claim to be “deploying” AI but cannot explain what data it relies on, how outputs are validated, or who owns the process are not innovating; they are gambling. And what happens when businesses gamble and lose? Wasted investment, reputational damage and compliance exposure. So why are so many settling for AI theatre instead of treating AI as the genuine innovation play it could be?

Strategy over spectacle

A good, non-theatrical AI strategy increasingly has four defining features: clarity of purpose, governance, adaptability and accountability. Clarity means knowing what problems the technology is solving and why they matter to customers. Governance means creating accountability for how models are trained, validated and deployed. Adaptability means treating AI as an evolving capability, not a fixed asset. Accountability means hard KPIs are buttoned down up front.

My red-flag alarms start going off when the board cannot explain the data it is using. Many executives still rely on vendors for understanding, trusting models they do not control and data they have never seen. Another big red flag is procurement outrunning policy: buying systems before defining objectives or ethics just creates technology debt that no amount of sloganeering will make any sexier. Activity over outcomes is a third red flag. If your employees can tell you the number of AI projects but not their outcomes, it’s time to press pause and ask “why” three times. Leaders have a lot to answer for here; think about how your own leadership communication has sounded over the last six months.

The reputational risk hiding in plain sight

Reputation will become the next frontier of AI risk. As tools like OpenAI’s Sora 2 or Anthropic’s Claude 3.5 enter everyday use, the boundary between legitimate creativity and intellectual property infringement is collapsing, and Big Tech can afford the lawsuits. Companies that fail to build compliance into their workflows may soon discover that an errant marketing video or unlicensed data source can trigger both lawsuits and public backlash. The lesson from Big Tech’s own pattern of billion-dollar settlements is clear: you likely can’t afford the legal bill.

Insurers are already reacting. Several major underwriters in the London market are preparing exclusions for AI-related claims, including copyright breaches and data misuse. Businesses that cannot demonstrate internal controls will face higher premiums or outright refusals of coverage. The financial risk of poor governance is no longer hypothetical, so check your clauses in case Santa brings you a lump of coal.

What leaders should do next

Every board should begin with a hard audit of what their AI systems actually do. The first question is not “how much” AI is being used but “to what end”. What decisions are being influenced, and how are they verified? Who owns the outputs? Who is accountable when they fail? Without answers to those questions, no AI strategy can be credible.

The next step is education. Senior leaders cannot outsource understanding to technical teams. AI fluency at the top is now as essential as financial literacy. Training programmes for top brass, not just analysts, should be part of any transformation plan. 

The final step is measurement. An AI strategy that cannot show clear productivity gains, better decisions or reduced risk is not a strategy – congratulations, you’ve bought some brilliant performance art. The real challenge for the next year will not be adoption but alignment. AI has to align with business objectives, customer trust and ethical standards. Anything less is noise, and it signals to the market that there are problems.

Investor patience is already thinning. Analysts at several leading banks have begun privately flagging “AI overclaiming” as a reputational risk in earnings reports. Companies that promise transformative impact without evidence are quietly being marked down. The next phase of the AI boom won’t reward those who talk the most about the technology, but those who can prove it’s creating measurable value and who know why they’re doing what they’re banging on about. Investors, like customers, are starting to demand substance over spectacle.
