The UK government is reportedly accelerating its approach to regulating artificial intelligence (AI) as it looks to build greater protections around the emerging technology.
New legislation would likely limit the production of the large language models that underpin chatbots such as OpenAI’s ChatGPT and Google’s Gemini, according to the Financial Times, which cited two people briefed on the plans.
While nothing is finalised yet, the law would require developers of the most advanced AI models to share their algorithms with the government and prove they have carried out safety testing.
“Officials are exploring moving on regulation for the most powerful AI models,” said one of the people briefed on the situation. The Department for Science, Innovation and Technology (DSIT) is “developing its thinking” on what AI legislation might look like, they added.
Another unnamed source said the rules would apply to the technology that sits behind AI products rather than to the consumer-facing applications themselves. No law will be introduced imminently, the people said.
City A.M. has approached DSIT for comment.
Last week, the Competition and Markets Authority (CMA) said it is concerned about the concentration of market power among a handful of technology giants that produce the largest AI models.
The government, which has previously described AI as a potential “existential threat”, has also been facing pressure from a growing number of MPs frustrated with Prime Minister Rishi Sunak’s slow approach.
He has previously said that “the UK’s answer is not to rush to regulate”.
Lord Chris Holmes, an advocate for using technology for the public good, has led calls for the government to take a more active approach to regulating AI.
“We’re building a raft of support and people behind that perspective at the moment,” he recently told City A.M.
It comes a month before South Korea is due to host the world’s second AI safety summit, set for Seoul on 21 and 22 May, after Britain held the first in November last year.