MPs and peers push for stricter AI rules as UK lags

Over 100 UK parliamentarians have called on the government to tighten the reins on AI regulation, warning that ministers risk letting the technology race ahead unchecked.

The cross-party push, led by figures including former defence secretary Lord Des Browne and ex-environment minister Lord Zac Goldsmith, argues that superintelligent AI could become “the most perilous technological development since nuclear weapons”.

Put forward by Control AI, whose backers include Skype co-founder Jaan Tallinn, the campaign urges Prime Minister Sir Keir Starmer to resist pressure from the US, where the White House has long opposed strict AI regulation.

“Even while very senior figures in AI are blowing the whistle, governments are miles behind the companies”, Lord Goldsmith said.

The warning comes as senior figures in Silicon Valley signal that the stakes are rising rapidly.

Jared Kaplan, chief scientist at Anthropic, said humanity may have to make an existential choice by 2030 over whether to let AI systems train themselves to grow even more powerful.

Calls for binding rules

The UK hosted an AI safety summit in 2023, establishing what is now the AI Security Institute and highlighting the risk of “serious, even catastrophic harm” from advanced AI.

But since then, the UK has been criticised for doing little to advance international cooperation or introduce binding controls.

The government’s approach so far has relied on existing sector regulators rather than new statutory measures.

The Department for Science, Innovation and Technology (DSIT) has insisted the tech is already regulated, saying its framework can keep pace with technological advances.

But critics argue this risks leaving the country trailing AI companies that continue to develop frontier models with minimal oversight.

Former AI minister Jonathan Berry has called for binding global rules with “tripwires” for highly powerful models, ensuring they are tested and equipped with off switches.

Andrea Miotti, Control AI boss, said: “AI companies are lobbying governments to stall regulation, claiming it would crush innovation – some of the same companies who say AI could destroy humanity. It’s quite urgent.”

Meanwhile, domestic concerns have escalated, with technology secretary Liz Kendall saying this week she is “especially worried” about chatbots and generative AI being used by teenagers, prompting a review of gaps in the Online Safety Act.

Ofcom is being asked to clarify its expectations for AI platforms, with new public guidance and age-verification measures expected next year.

On the other hand, business adoption continues apace.

Research from consultancy Elixir shows nearly half of UK firms now channel substantial investment into AI, with early adopters reporting significant cost savings.

But persistent fragmentation and a lack of clear strategy leave many companies exposed, while the FCA maintains its preference for collaboration over rushed legislation.
