Technological transformation in the financial sector has taken root. According to a report by Nvidia, 91% of financial services companies are either assessing artificial intelligence or already using it in production – from portfolio optimisation and fraud detection to generative AI workflows, such as report generation and investment research.
However, AI's risks to the economy remain underexplored in research and policy analysis. Among regulators, there is a growing sense that AI's more diffuse role in global investment needs to be handled carefully.
As US Securities and Exchange Commission Chair Gary Gensler commented, ‘It is likely that regulatory gaps have emerged and may grow significantly with the greater adoption of deep learning in finance… [meaning that] deep learning is likely to increase systemic risks.’ This should be of particular concern as the use of AI in finance accelerates.
Threat of further flash crashes
Algorithmic convergence is a well-understood phenomenon among high-frequency trading firms. By synchronising trading strategies and behaviours across algorithm-led financial entities, firms have been able to execute trades in fractions of a second, capitalising on tiny fluctuations in asset prices. This is great for firms when it allows them to perform transactions no human can. Yet it creates risks when algorithms behave recklessly in unison, as was observed in the 2010 flash crash, in which major US equity indices suddenly plunged and rebounded, with the Dow Jones dropping almost 1,000 points in a matter of minutes.
In 2024, there is a risk that similar convergence in market behaviour will emerge not only in the HFT world but in global finance more broadly. AI in finance is, of course, nothing new. Yet there is a current surge of investment in AI, onshoring of AI tools (such as large language models) and active deployment of AI to help firms make more informed decisions about asset allocation. This surge should prompt both the financial sector and regulators to think very seriously about the hazards that emerge as firms converge in their financial market decision-making.
These concerns have been thoroughly described by Gensler in recent years. Gensler has commented, for example, that ‘herding and crowding in high-frequency algorithmic trading is partially responsible for causing flash crashes’ and that homogeneity in investment and model strategies can emerge from the people developing these models having ‘fairly similar backgrounds’ and being ‘trained together: the so-called apprentice effect’.
As a result, AI models may come to dominate firms’ asset allocation and decision-making, leading to convergence as AI systems are trained on the same (or very similar) massive training sets. This collectivises financial returns, reducing volatility. Yet it also collectivises risk, with the potential to amplify systemic hazards. A report by the International Organization of Securities Commissions, for instance, highlighted that algorithms operating across markets could rapidly transmit shocks from one market to another, thereby increasing systemic risk.
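To see why this convergence matters, consider a stylised sketch (purely illustrative, and not drawn from any of the sources cited here). In the toy Python simulation below, each firm's daily trading signal blends a shared 'common model' component with a firm-specific one; the correlation parameter, the number of firms and the buy/sell rule are all hypothetical assumptions made for illustration. As the shared component dominates, one-sided order flow (everyone buying or selling at once) becomes far more likely, a crude proxy for the shock amplification described above.

```python
import random
import statistics

def order_flow_volatility(correlation: float, n_firms: int = 50,
                          n_days: int = 250, seed: int = 0) -> float:
    """Toy model: each firm's daily signal mixes a shared 'common model'
    signal with an independent firm-specific one. Higher `correlation`
    stands in for firms training on the same data. Returns the standard
    deviation of aggregate daily order flow."""
    rng = random.Random(seed)
    daily_flows = []
    for _ in range(n_days):
        common = rng.gauss(0, 1)          # signal from the shared data/model
        flow = 0.0
        for _ in range(n_firms):
            own = rng.gauss(0, 1)         # firm-specific signal
            signal = correlation * common + (1 - correlation) * own
            flow += 1 if signal > 0 else -1   # each firm buys or sells one unit
        daily_flows.append(flow)
    return statistics.pstdev(daily_flows)

# As model homogeneity rises, aggregate order flow becomes far more lopsided.
for rho in (0.0, 0.5, 0.9):
    print(f"signal correlation {rho:.1f}: order-flow volatility ~ "
          f"{order_flow_volatility(rho):.1f}")
```

In this sketch, independent firms largely cancel each other out, while highly correlated ones push in the same direction on the same day, producing the kind of synchronised selling that can turn a small shock into a market-wide move.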
Frameworks and possible guardrails
Since algorithms react instantaneously to market conditions, they may widen bid-ask spreads or halt trading during turbulent markets, diminishing liquidity and making periods of uncertainty more violent. In light of this, regulatory bodies such as the SEC and the European Securities and Markets Authority are crafting frameworks to ensure that implementations of AI do not adversely impact financial stability.
As for what these measures should look like, one approach would be to increase liquid capital requirements for financial institutions that choose to use AI systems. Another would be to levy small taxes on transactions, which would stamp out much of the high-frequency trading market. Regulators could also assess algorithmic risks across the entire lifecycle, from input data to algorithm design and output decisions. Still, not all AI systems are created equal.
How should regulators evaluate those that carry particular risks of ushering in toxic market dynamics, when even the innovators who create these AI systems do not understand how they think? As Gensler remarked, ‘If deep learning predictions were explainable, they wouldn’t be used in the first place.’ Furthermore, if regulators take too heavy-handed an approach, they may encourage the very model homogeneity they seek to avoid, as algorithms would have even less room to produce divergent results.
Policy-makers must be wary of the damage that can emerge from inaction. Yet they should also recognise the capacity of AI regulation to control – and itself to be a source of – market risk. Early initiatives by the European Union to create legal frameworks are a first meaningful step towards global AI governance regimes. The EU’s labelling of ‘high-risk’ AI use cases in finance – such as credit scoring – is possibly a good start. Yet more needs to be done.
While policy-makers cannot – and should not – halt AI growth, they may have a fighting chance to slow the speed at which AI systems are allowed to interact with financial markets. Regulators should act now to create guardrails for a technology that is quickly becoming too smart, swift and adaptable for its (and our) own good.
Julian Jacobs is Senior Economist at OMFIF.