Meta president Nick Clegg says AI can’t be kept in the ‘clammy hands’ of Big Tech

Artificial intelligence (AI) needs to be democratised so it does not just remain in the “clammy hands” of Big Tech companies based in California, the president of Meta has said.

Former British deputy prime minister Nick Clegg has said that a world in which technology is "kept in the clammy hands of a small number of very, very large and well-heeled companies in California" is "not going to stick".

He asked: “Can anyone conceive of a world in which AI technology… which is going to be the means by which we communicate with each other in the online world for generations to come, should only be determined and shaped by a small number of companies who’ve got the GPU capacity, the deep pockets, the data to build the underlying technology?”

"I just think that's self-evidently not going to stick," Clegg said on Tuesday, addressing a room of journalists and industry figures at the London offices of Meta, the company behind Facebook, Instagram and WhatsApp.

During the event, Clegg also revealed that Llama 3, Meta’s newest large language model (LLM), will take the form of a “suite” of next-generation models with different capabilities.

They will be rolled out successively over the course of this year, starting within the next month or even sooner. "It's not a one-and-done, hey presto moment," he added.


Last year, Meta unveiled Llama 2, a large language model that can generate text.

Alongside Clegg on the panel sat Meta's chief product officer, Chris Cox; its chief AI scientist, Yann LeCun; and its vice president of AI research, Joelle Pineau.

Pineau said Meta has also developed a speech generation tool but this remains behind closed doors. “We decided after a good amount of reflection not to release the model openly because right now the potential for misuse of voice generation technology is too great,” she explained.

OpenAI, the creator of ChatGPT, has also developed a speech generation tool and has similarly withheld it from general use.

But LLMs like Llama 2, ChatGPT and Google's Gemini have come under fire from regulators around the world, who are concerned about the potential dangers of the emerging technology being made publicly available.

They are attempting to get ahead of tech companies and pre-empt any disasters. The EU's landmark AI Act is set to enter into force this year.

In the UK, a growing number of MPs are frustrated with the government's pro-innovation stance, which critics say lacks urgency and fails to provide concrete regulatory frameworks.

AI ‘safety institutes’ have been set up in the UK and US to test the biggest and potentially most harmful AI systems, known as foundation models.

Clegg said regulators do need to develop guardrails in parallel with technological advances but he described the current plans as “crude” and “not the most rational”.

“At the moment, the regulators are working on a rather crude rule of thumb that if future models are of [a] particular size, or surpass a particular size… that therefore there should be greater disclosure than models under that [size].

“I don’t think anyone thinks that over time that’s the most rational way of going about things because, in the future, you will have smaller fine-tuned models aimed at particular purposes that could arguably be more worthy of greater scrutiny,” he added.

Another concern is how AI could affect democracy in the world's biggest ever election year. Meta says it has so far detected very little activity on its platforms attempting to subvert or disrupt elections.

“It is just striking, so far, that the trends – they may change dramatically – but so far, have not suggested something wildly out of the ordinary,” said Clegg.
