UK’s AI Safety Institute set to open in San Francisco

The UK’s Artificial Intelligence (AI) Safety Institute is set to open its first overseas office in San Francisco this summer, technology secretary Michelle Donelan announced today.

The British government hopes the new office will enable the UK to tap into Silicon Valley’s tech talent and strengthen ties with the US.

“This expansion represents British leadership in AI in action,” said Donelan. “It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.

“Since the Prime Minister and I founded the AI Safety Institute, it has grown from strength to strength and in just over a year, here in London, we have built the world’s leading Government AI research team, attracting top talent from the UK and beyond.

“Opening our doors overseas and building on our alliance with the US is central to my plan to set new, international standards on AI safety which we will discuss at the Seoul Summit this week,” she added.

The Institute’s London headquarters, with a team of over 30, will continue to expand, the government said.

The world’s second AI Safety Summit kicked off on Monday in Seoul. Co-hosted by South Korea and the UK, the two-day event will feature global discussions on progress in AI.

The announcement comes as the UK AI Safety Institute has released the results of recent safety tests on five publicly available advanced AI models.

The tests produced mixed results. Several models successfully completed cybersecurity challenges but struggled with more advanced tasks.

While some models demonstrated knowledge equivalent to PhD-level chemistry and biology, all tested models remained highly vulnerable to basic “jailbreaks” and could produce harmful outputs.

The tested models were also unable to complete more complex, time-consuming tasks without human oversight.

AI Safety Institute chair Ian Hogarth said: “The results of these tests mark the first time we’ve been able to share some details of our model evaluation work with the public. Our evaluations will help to contribute to an empirical assessment of model capabilities and the lack of robustness when it comes to existing safeguards.

“AI safety is still a very young and emerging field. These results represent only a small portion of the evaluation approach AISI is developing. Our ambition is to continue pushing the frontier of this field by developing state-of-the-art evaluations, with an emphasis on national security related risks,” he added.
