The UK’s data watchdog chief is set to warn tech businesses that they must “bake in” data protection at every stage of developing artificial intelligence technologies to properly protect people’s personal information.
John Edwards, the UK Information Commissioner, will deliver the warning in a speech on privacy, AI and emerging technology this afternoon.
He will tell an audience of tech leaders at the New Scientist Emerging Technologies Summit: “As leaders in your field, I want to make it clear that you must be thinking about data protection at every stage of your development, and you must make sure that your developers are considering this too.
“We call it data protection by design and default. Protection for people’s personal information must be baked in from the very start. It shouldn’t just be a tick box exercise, a time-consuming task or an afterthought once the work is done.”
In 2023, over 3,000 cyber breaches were reported to the Information Commissioner’s Office, which regulates the collection and use of personal data.
The watchdog has been consulting on generative AI, exploring questions such as how much control people are willing to give up to use AI, whether they know how much data they are sharing, and whether businesses could do more to educate users.
It says that where AI uses personal data, that use falls within the scope of existing laws governing data protection and transparency. This includes using personal data to train, test or deploy an AI system.
Earlier this year, the British government announced it would spend £10m on preparing regulators to deal with new technologies.
In a bid to shift away from being “the regulator of no”, the ICO recently launched an AI and digital hub offering innovators and businesses free advice on complex regulatory questions.