The FCA’s refusal to regulate AI is pushing the problem onto City firms

The FCA claims it does not regulate technology, only outcomes, but this line of thinking can only go so far with AI, writes Omar Salem

“Read my lips: no new taxes,” said George Bush senior as he accepted the Republican presidential nomination in August 1988. A couple of years later, he signed off tax rises of $146.3bn (which was a lot of money in those days).

Last year, the FCA mouthed its own promise: no new AI regulations. The FCA’s vow will surely also, in time, collide with reality. The reason is simple: AI is a profound new technology that will uniquely and radically reshape almost every aspect of financial services.

The FCA has espoused the principle of “same risk, same regulatory outcome”, yet it is clear that AI poses unique risks (while offering many opportunities). The FCA cannot credibly say it has considered every area of regulation and decided that not a single drop of AI-specific regulation is needed. In fact, its joint discussion paper with the PRA suggests the contrary.

The FCA and PRA say that AI trading could make markets vulnerable to flash bubbles or crashes, emphasise the importance of humans being involved in AI decision-making, and warn that AI could be used to exploit consumer behavioural biases, such as inertia in switching products.

The FCA claims that it does not regulate technology but focuses on outcomes. The only problem with this is that, hold the front page, the FCA does specifically regulate lots of technologies, including algorithmic trading, cloud-based outsourcing, social media and IT security. It is not clear why the FCA thinks AI is so different.

While the FCA has said that it does not plan to introduce extra regulations for AI and will instead rely on existing frameworks, this leaves the firms it regulates in a dilemma. The FCA’s regulatory framework includes high-level principles, such as having adequate systems and controls. If firms breach these, they can be subject to enforcement action and fines. The FCA has said it will apply its existing rules to AI, but what it expects firms to do in practice is opaque. The risk is that, when things go wrong, firms are judged by the regulator with the benefit of hindsight.

Getting the regulatory porridge just right

It is an iron law of regulation that if a regulator produces high-level principles, people complain that more detailed guidance is needed. However, if the regulator produces detailed rules, there will be objections that they are too prescriptive. Getting the regulatory porridge ‘just right’ is hard. 

The assumption that either high-level or detailed rules are always better is mistaken. High-level rules may provide flexibility and be less of a burden, but more detailed rules provide reassurance that, if you follow them, you will be compliant. Bad regulation is bad for business, but so is regulatory uncertainty.

The effect of the FCA’s approach has been to push the burden onto firms to go through the FCA Handbook and work out how its rules apply to AI. One example of how this regulatory void is playing out is the consumer duty – the FCA’s requirements on delivering good outcomes for retail customers. The FCA has warned about the risks of AI to consumers, but its recently published focus areas for the consumer duty do not mention artificial intelligence. This is despite the scope for AI to have a huge impact on everything from car insurance to financial advice.

A likely side effect of the FCA’s approach is not an absence of new regulation of AI but regulation that is not transparent. The risk is that the FCA applies inconsistent or sub-optimal requirements for AI when authorising new firms or supervising those it already regulates.

There is also a risk that the FCA fails to act proactively to support the deployment of AI for the benefit of consumers. While it has set up an AI Live Testing service to help firms test their AI models in a controlled environment with feedback from the FCA, this is limited to a small number of larger firms.

Take financial advice, where AI has the potential to close the ‘advice gap’ and make high-quality financial advice accessible to many who need it. Despite the scope for ‘robo-advice’ having been discussed for years, the hype is yet to translate into these products being widely available. A clearer regulatory framework might help more firms bring these products to market.

Although the FCA has vacated the field on AI, and created a heap of uncertainty in doing so, it has left scope for City firms to shape their own approach. That will allow firms to tailor their approach to the nature of their business and governance. It may also be that different sectors develop their own standards to manage the risks they face. In short: don’t wait for the right porridge – make your own, if you have to.

Omar Salem is a financial regulation partner at Fox Williams LLP
