Despite his ambitions to transform Britain into a global science and tech superpower, former Prime Minister Rishi Sunak fell short on a critical front: regulating artificial intelligence.
Now it’s Keir Starmer’s turn. The Labour leader, who has so far been relatively muted on the subject, is expected to introduce an AI bill to parliament today during the King’s Speech.
As outlined in its election manifesto, Labour wants to “ensure the safe development and use of AI models by introducing binding regulation on the handful of companies developing the most powerful AI models”.
“We are nowhere near where we need to be on the question of regulation,” said Starmer last year. He has promised a “stronger” approach to AI regulation than that of the Conservatives – a low bar, considering Sunak’s stance that it is too early to legislate for this emerging technology.
Sunak advocated for a light-touch approach, believing minimal regulations would attract companies and investors to do business in the UK.
While most companies do not want cumbersome rules, they do want regulatory clarity, which gives businesses and consumers the certainty they need to make investment decisions.
Knowing that regulation is on the horizon, both in the UK and abroad, many businesses currently say they feel stuck, fearful that new potential AI inventions or applications could breach incoming rules and lead to fines.
According to compliance platform Vanta, over half of UK business leaders say they are more likely to invest in AI once its use becomes regulated.
Sunak’s approach to regulating AI was to leave much of the decision-making to individual sector regulators, potentially leading to a “patchwork” of disparate approaches, legal uncertainty and loopholes, according to Simon Colvin, technology partner at Pinsent Masons.
But Starmer must tread a fine line between rules that are firm and rules that are fair: companies are nervous that any “binding regulations” would prove too strong.
In contrast to Colvin, Eleanor Lightbody, chief executive at London-based legaltech company Luminance, warned: “A one-size-fits-all approach to AI regulation risks being rigid, and given the pace of AI development, quickly outdated.”
Lord Chris Holmes, who led a regulatory bill for AI in parliament before the general election, told City A.M.: “There’s always a risk that, if it swings too far the other way [and the rules are too restrictive], that it won’t enable what we all should be looking to achieve, which is pro innovation, pro citizen rights, pro consumer protection, and it has to be that we consider all of those three elements when we seek to legislate.”
While questions still hang over Starmer’s broader attitude towards AI, he must capitalise on the current momentum and build on Sunak’s ‘superpower’ agenda.
Getting the balance right in this AI bill will be challenging, but after two AI safety summits, failed talks on protecting intellectual property rights from the rise of AI, and much back-and-forth, Britain’s new pro-business Prime Minister should not drag his heels on the issue.