
British terrorism reinsurer warns extremists are ‘actively experimenting’ with AI to carry out attacks


Britain's terrorism reinsurer has warned that violent extremists are “actively experimenting” with artificial intelligence (AI) to carry out attacks, spread propaganda and radicalise victims.

“AI is likely to offer significant benefits for [terrorists and violent extremists] in support of their operations and activities, including the planning, facilitation, and execution of violent attacks,” said a new report published by Pool Re.

The report, written by Dr Simon Copeland, a terrorism research fellow at the Royal United Services Institute, explored the potential current and future risks from the misuse of AI by “nefarious actors”.

It comes as the UK terrorism threat level, set by the Joint Terrorism Analysis Centre, currently stands at ‘substantial’, meaning an attack is likely. The level was raised to ‘severe’, meaning an attack is highly likely, in November 2020 and lowered back to ‘substantial’ in February 2021; it has switched between the two levels six times since 2019.

In addition to violent attacks, the report said AI is likely to provide new ways for terrorists to spread “targeted” propaganda, control “armies of bots” and evade detection systems used by social media platforms.

Terrorism sympathisers have recently begun using AI to create optical illusions as one method of distributing propaganda, the report said.

These images appear to depict typical scenes such as a cityscape but, when viewed from a distance, they reveal hidden symbols or subliminal messages, like the image of a terrorist leader or ideologue.

Pro-Islamic State actors have already used bots to amplify propaganda online and translate Arabic messages into several languages simultaneously. AI-powered tools are allowing bot accounts to interact with online users in an increasingly human-like manner.

Additionally, a combination of AI-generated images, audio and video deepfakes impersonating leaders or high-profile members of terrorist and extremist groups, along with AI chatbots trained on often highly discriminatory and extremist online content, may result in cases of radicalisation.

“For individuals spending significant time generating or consuming AI content or interacting with chatbots, there is a risk that being continuously exposed to such discriminatory content may begin to impact their beliefs and even potential susceptibility to extremist narratives,” explained Pool Re’s report.

The firm is a reinsurance pool in the UK that provides terrorism insurance coverage. Almost all major insurers in the UK are members of Pool Re.

It allows insurers to provide terrorism coverage by pooling premiums, sharing risks and relying on an unlimited government backstop to guarantee claims payments in the case of catastrophic losses.

In return for this guarantee, Pool Re pays an annual premium to the government. It was established in 1993 in response to the increasing threat of terrorism and the lack of available insurance for such risks.
