Can Britain’s AI growth lab speed up innovation without risking safety?

The UK government’s recent unveiling of an AI ‘growth lab’ – a sandbox designed to let companies trial AI under relaxed regulatory conditions – has been hailed as a bold step to accelerate innovation.

At the Times Tech Summit on Tuesday, technology secretary Liz Kendall described it as a chance to remove “needless red tape” while safely testing technologies that could make public services more efficient and businesses more competitive.

The sandbox is intended to allow firms to pilot AI products that may otherwise be constrained by outdated rules, creating a controlled environment where some regulations are temporarily modified.

Early targets include housing approvals and NHS patient care – areas where AI could speed up processes and reduce bureaucracy.

Kendall emphasised the potential to “fast-track responsible innovations that will improve lives and deliver real benefits.”

Speed and safety

While the promise of faster innovation entices investors and startups alike, others are already warning of the risks.

Nik Kairinos, chief executive of AI safety monitoring platform RAIDS AI, warns that “speed must be balanced with safety and foresight, as today’s innovation can all too quickly become tomorrow’s problem”.

He points to the recent Amazon Web Services (AWS) outage, which affected millions of users worldwide, as an example of how even mature, complex systems can falter unexpectedly.

Bernhard Maier, partner at law firm Browne Jacobson, adds that sandboxing “makes perfect sense in some contexts, and in others it may be inappropriate or dangerous. It is a delicate balance to be struck.”

For those in favour, the sandbox is a logical, pragmatic response to an AI landscape that has long been constrained by regulations written before machine learning and autonomous systems even existed.

It draws inspiration from previous initiatives such as the FCA’s regulatory sandbox and the MHRA’s ‘AI Airlock’, which have been used to test technology in controlled environments.

TechUK’s deputy chief executive Anthony Walker labelled the plan a “strong step” toward accelerating adoption while managing risk, calling it a way to “trial and test technologies in the real world – and do that safely”.

Government red tape

Yet the announcement lands against a backdrop of persistent scepticism over the government’s broader regulatory ambitions.

The recently published Regulation Action Plan aims to cut the burden on businesses, but according to a briefing by Robert Colvile, director of the Centre for Policy Studies, it rests on flawed assumptions.

Originally, Prime Minister Keir Starmer promised to reduce compliance costs for businesses by 25 per cent.

Colvile notes that the government has abandoned that target, instead aiming to save £5.6bn a year on a narrower measure of “administrative cost”.

In practice, this underestimates the real burden on firms. The calculation values staff time lost to regulation at the average wage, ignoring that compliance duties typically fall on more highly paid directors and managers.

Moreover, the underlying surveys cap reported compliance time at “more than 50 days” per month per firm, so the cumulative work of large compliance teams goes uncounted.

“Ministers are boasting about their commitment to cut regulatory costs when they do not even have credible data on the cost of regulation to business”, Colvile said.

He specifically calls out the Employment Rights Bill, whose impact assessment was deemed “unfit for purpose”, arguing that a proper costing is urgently required.

Innovation or oversight?

On one hand, the AI growth lab offers a structured way to experiment with cutting-edge technology and reduce bureaucratic friction.

On the other, broader deregulation initiatives risk being symbolic rather than substantive, relying on metrics that underplay the true impact on businesses.

The challenge is that while speed without oversight risks embedding unsafe AI applications or entrenching biases, over-cautious regulation risks stifling technological progress and economic growth.

As Kairinos argued, the goal must be to pair innovation with “robust safety checks, diverse perspectives, and long-term foresight to ensure that progress does not come at the cost of stability, fairness and safety.”

Public appetite for oversight is strong, with recent polling by YouGov suggesting that nearly four in five Britons support creating a UK AI regulator.

The same polling found that 96 per cent backed audits of powerful systems and 90 per cent wanted pre-approval before frontier AI models are trained.

Meanwhile, only nine per cent trust tech executives to safeguard safety on their own.

The AI growth lab, then, sits at a crossroads: it could either mark the UK’s emergence as a global hub for AI innovation, or become another case study in government rhetoric outpacing measurable reform.

If it succeeds, it will be by ensuring that experimentation is genuinely controlled, that safety standards are non-negotiable, and that the broader machinery of deregulation is grounded in accurate, credible assessments rather than empty targets.
