
AI deepfake scams hit FTSE 100 bosses


A surge in deepfake technology is helping fraudsters impersonate FTSE chief executives with alarming ease, leading to a rise in sophisticated “CEO scams” that often go unreported.

At least five FTSE 100 companies and one FTSE 250 firm have fallen victim to these kinds of attacks this year. High-profile targets include Octopus Energy, discoverIE and the chief executive of WPP, who revealed in May that his voice was cloned on a company call.

However, the actual number of so-called CEO scams is believed to be significantly higher, according to The Times. Other companies admitted either experiencing these attacks or actively monitoring the growing threat.

Scams often involve requests for payments or sensitive or financial information. The modus operandi typically involves messages or emails instructing recipients about an imminent confidential acquisition requiring an urgent fund transfer.

An image – often sourced from the company’s official website – can accompany the message and the scam can escalate with a computer-generated voice note.

Nick Jefferies, chief executive of electronics manufacturer discoverIE, told The Times his voice had been deepfaked. “It does sound like me,” he said. “Whether it was AI or just edited words, I don’t know.”

While it is not a new problem – national body Action Fraud released a warning about CEO scams in 2016 – the extent of the problem is hard to measure as companies seek to avoid embarrassment or damage to reputation.

What’s more, many of these kinds of crimes go unreported. According to the Crime Survey for England and Wales, only 13 per cent of all fraud cases are reported to Action Fraud or the police.

CEO scams are also likely on the rise due to artificial intelligence. Jake Moore, global cybersecurity advisor at ESET, attributes much of the increase in attacks to the ease and speed with which AI lets fraudsters produce deepfakes: “As AI steps up offering quality voice cloning techniques and deepfakes, due diligence and more multi layered authentication protection is imperative moving forward,” he warned.

David Sancho, a senior threat researcher at cybersecurity company Trend Micro, said: “With the advent of video creation of decent deepfakes this is ramping up. No wonder that these criminals [are] jumping onto this trend.”
