
EU AI Act will set the global standard, so British businesses must keep up


History tells us that the EU’s AI Act will likely set the tone for global standards; UK companies that get ahead of it won’t regret doing so, writes Nick White

In early December 2023, a significant milestone was achieved in the realm of artificial intelligence (AI) regulation when the European Council and European Parliament provisionally agreed on the text of the EU’s new AI Act. Subsequently, in late January, versions of the “final” text were leaked online. While some minor fine-tuning remains, the core content has been settled, marking a critical juncture in how the tectonic plates of global AI regulation may settle.

The Brussels Effect

One of the most intriguing aspects of the EU’s AI Act is its potential to trigger what has become known as the “Brussels Effect”, the idea that EU regulations often become de facto global standards. The effect was most clearly visible with the General Data Protection Regulation (GDPR), which, following its introduction, became the global yardstick for data protection regulation.

The key to the Brussels Effect in the case of the GDPR lay in the combination of the EU’s international power and influence, the GDPR’s status at the time of its enactment as the most comprehensive piece of data protection legislation in the world, and the regulation’s extra-territorial reach. The GDPR applied not only to EU-based businesses but also to entities outside the region that offered goods and services to EU residents or monitored their behaviour. Some nations, including China, perceived this international reach as assertive and responded with countermeasures in the form of their own blocking legislation.

The AI Act looks to tread a similar path by extending its influence beyond the borders of the 27 EU Member States. Central to this is the AI Act’s extraterritorial application, outlined in Article 2, which mandates compliance from any AI system provider or deployer, regardless of their location, if the “output produced by the system is intended to be used” within the EU. 

Whilst the specific interpretation of “output” remains ambiguous and awaits further clarification, this uncertainty will likely motivate non-EU entities to err on the side of caution and opt for compliance with the AI Act.

AI Act’s Approach

The AI Act categorises AI systems by risk levels, with “high-risk” systems subject to stringent obligations. High-risk applications encompass areas like safety-related functions, biometric identification, employment and law enforcement, among others. These obligations include adherence to data governance standards (Article 10), transparency guidelines (Article 13) and the incorporation of human oversight mechanisms (Article 14).

The global business community is likely to perceive alignment with these robust EU standards as advantageous due to the sheer size of the EU market and the operational complexity associated with maintaining varying standards across different regions. Furthermore, the AI Act’s emphasis on protecting fundamental rights, including non-discrimination (Article 5) and privacy, resonates with the prevailing global discourse on AI ethics. Companies worldwide are expected to view these standards as ethical benchmarks rather than mere regulatory requirements.

UK-specific implications

Brexit has created a unique scenario for the UK regarding AI regulation. While the UK is no longer part of the EU, its businesses and entities engaged in AI-related activities will still be affected by the AI Act if they intend to operate within the EU market, which the vast majority do.

Moreover, considering the AI Act’s likely emergence as the de facto global standard, there is a good probability that the UK’s own domestic regulations will strive for a form of alignment. While the Act will not be enforceable until two years after its adoption, businesses both within the EU and the UK would be well-advised to learn from the experiences of GDPR implementation and strive for early compliance rather than delaying until the eleventh hour.

This could mean many things for British companies depending on the industry they are in and how they use AI technology. There will, however, be some universals: those who prioritise clean and accurate datasets, maintain human oversight of AI-driven processes and test those processes for bias, adjusting accordingly, will end up ahead of the pack.

It is conceivable that different regions will adopt diverse AI compliance strategies. However, in light of the EU’s significant regulatory influence, the comprehensive scope of the legislation and its extraterritorial provisions, broad alignment with the Act seems increasingly probable. Just as we observed the profound impact of the GDPR, the AI Act is poised to usher in a new era of global AI regulation and governance.
