Getting AI policy in the financial sector right

Regulators discuss path forward for AI in finance at 2024 OMFIF seminar

The field of artificial intelligence governance and policy appears to be booming. In the aftermath of large language models’ ascent into the public consciousness, governments began to take seriously the risks associated with AI and the need for guardrails to steer the technology’s use cases. Yet as AI governance grows, the appearance of AI policy-making often outweighs actual policy commitments.

This was a guiding theme in OMFIF’s AI in finance seminar, which, on 19 March, convened, in partnership with the University of Oxford, a panel of leading AI innovators and policy-makers to discuss the applications, risks and future regulation of AI in finance. The panellists included Ellie Sweet of the UK Department for Science, Innovation and Technology, Paolo Valenziano of the Bank of England, Mario Pisani from HM Treasury, Anne Leslie from IBM, Stephen Barr from Microsoft and Robert Trager from the University of Oxford. The seminar covered AI applications and risks in the financial sector alongside the effort to create regulatory parameters to guide AI’s use cases.

Although AI use in the private sector is longstanding and rapidly growing, public sector AI infrastructure is a newer phenomenon. Reflecting on Microsoft’s experience of the AI boom, Barr, global director of industry digital strategy, commented that ‘a lot of government departments are looking at AI as the silver bullet for a lot of legacy systems that they’ve had in the past…’ He mentioned that the public sector is seeking to ‘unlock productivity [at] a scale that [they’ve] never been able to before.’

What do these public use cases of AI look like? Valenziano, head of content delivery, discussed the Bank of England’s use of AI to support communications efforts, offering more effective and efficient dissemination of information. He noted that, despite some scepticism of the technology, its use is expanding rapidly across many functions. There are obvious risks that accompany this embrace of the technology, however. Valenziano spoke about deepfakes – false AI-generated video or audio recordings – as posing ‘reputational risks’, saying that the Bank is aware of these risks at the highest level of its governance.

Pisani, deputy director of financial stability, reflecting on HM Treasury’s experience, similarly emphasised the AI risks that public institutions are working to navigate. He remarked that in the financial sector, ‘so much of what we do is based on trust and confidence… and without explainability, it’s quite hard for market participants to extract signals from what’s happening.’ The issue of explainability – the difficulty humans face in understanding precisely how an AI system reaches its decisions – has been a topic of considerable discussion in the UK policy response to AI. Pisani emphasised the potential for ‘incredible estimated productivity improvements in the banking sector.’ Yet he noted the risks around explainability, as well as the additional macroeconomic risks associated with technology market concentration and ‘trading convergence’.

How, then, is the public sector responding to these challenges? Sweet, head of regulatory strategy, digital and tech policy at UK DSIT, has been part of the UK’s primary policy response to AI and the frameworks that guide it. Speaking on the ‘busy year in the world of AI regulation,’ she argued that the principles set out in the UK government’s white paper in March 2023 laid out a ‘comprehensive regulatory framework for AI.’ The UK response, in Sweet’s view, nestles AI regulation within existing government entities: ‘It’s better to have our existing expert regulators interpret and apply those principles within their existing remits, rather than necessarily standing up a whole new regulatory framework.’ This echoes the approach Japan took in launching its AI Safety Institute in February 2024, which is distributed across governmental departments. At the same time, the UK government is heavily stressing the ‘international debate’ as it looks to situate the UK as both a technological and normative leader in AI.

Trager, an AI governance expert, offered a more sceptical view of the current AI regulatory landscape. Drawing a comparison to car manufacturers regulating themselves, Trager highlighted the danger of allowing AI companies to self-regulate and raised a serious question about whether we ‘will be able to move from a statement of principle to practice.’ He referenced efforts at international co-operation on climate change as a prominent example of how some international conversations ‘flounder’. What, then, should make international AI governance different?

Anne Leslie of IBM offered a challenging perspective on the broader discourse on AI development, cautioning against ‘a tendency to talk about [AI] as though we must not stifle innovation’ and adding that ‘that is a choice, not a natural law.’ Her commentary on the likely socioeconomic consequences of broad AI dissemination in the economy – including market concentration, labour displacement and inequalities – was met with broad agreement among participants.

The dissonance between collective agreement on AI risks and the relatively tepid regulatory frameworks underscores an inherent conflict in AI development. The technology is seen as vital to future economic growth, military power and the competitiveness of national economies. So while the risks of AI may be tremendous, the race to become a leader in AI creates strong incentives to sacrifice safety for innovation.

Julian Jacobs is Senior Economist, Digital Monetary Institute, OMFIF.
