Government institutions are generally not the first to usher in cutting-edge technology. Yet the promise of artificial intelligence for central bank operating models is steadily emerging. Chief among the use cases are identifying anomalies in data, developing more robust macroeconomic projections and deploying unstructured data.
While the benefits are clear – increased efficiency and accuracy – they carry risks concerning data privacy, data quality and algorithmic convergence. To confront these challenges, it is vital that AI’s risks are well understood and accounted for by policy-makers and regulators.
AI adoption not novel
AI has been used in at least rudimentary forms within finance since the 1980s. Over the last 20 years, the development of high-frequency trading – in which algorithms trade rapidly to capture micro gains from small price fluctuations occurring within fractions of a second – has been particularly notable. Central banks, however, have appeared reluctant to embrace complex machine learning methods. This is now changing.
The shift toward formalisation and quantitative modelling within economics has entrenched the dominance of statistics, mathematics and computer science. As AI is born of and built on these disciplines, the development of AI use cases within central banks was an intuitive next step. OMFIF discussed this shift in a recent roundtable with David Hardoon, group chief data and AI officer, UnionBank of the Philippines.
According to Hardoon, central banks around the world are beginning to use AI to support their macroeconomic projections and forecasting. This is a use case that several central banks discussed in a Bank for International Settlements paper on the development of AI in central banking. It includes, as Hardoon outlines, ‘running large complex scenarios and outcomes’ and integrating more complexity into projections of risk, growth and market elasticity.
How central banks across the world are implementing AI systems
For one, the Bank of Indonesia is using ML to incorporate the impact of foreign investor behaviour on exchange rates and monetary policy decisions. Similarly, Banque de France’s BIZMAP project used AI-driven market forecasting and data deployment to support French small- and medium-sized enterprises in accessing global markets.
Yet AI usage for macroeconomic projections remains cautious. Central banks are wisely reluctant to hand over substantive control to highly powerful algorithms and systems.
However, one domain where AI plays a more active role in central bank functions is identifying anomalies and outliers in data. This can take at least two forms. First, AI can help central bank economic research divisions correct mistakes of human error, supporting more streamlined and accurate data that are less susceptible to noise and bias.
Second, AI anomaly detection could be used to identify otherwise hidden signals of market risk, whether they are signals of volatility, panic within financial markets or potential valuation gaps. Consider, for instance, Banco de España’s work alongside the Knowledge Engineering Institute (IIC) to develop an ML tool that detects outliers in non-financial firms’ accounting statements.
The Bank of Canada, meanwhile, uses AI to screen financial institutions’ data, improving its efficiency and quality. This is a welcome development, given the sensitivity of economic policy to such figures. The Bank of England and the European Central Bank also use AI to monitor data quality for signals of unexpected economic shocks. More broadly, the Bank of Israel uses ML to check all daily transactions in derivatives markets, while the Deutsche Bundesbank uses an unsupervised ML system to detect outliers in all major financial data sets.
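To make the idea of statistical outlier detection concrete, the sketch below flags implausible entries in a reported data series using a modified z-score built from the median and median absolute deviation. The series, threshold and scaling are illustrative assumptions; the systems at the institutions named above use far richer, proprietary models.

```python
from statistics import median

def robust_outliers(values, threshold=3.5):
    """Return indices of values whose modified z-score (based on the
    median and the median absolute deviation, MAD) exceeds the
    threshold. Median-based statistics resist distortion by the very
    outliers being hunted."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    if mad == 0:  # no dispersion: nothing can be flagged
        return []
    # 0.6745 rescales MAD to match the standard deviation under
    # normality (the conventional modified z-score constant).
    return [i for i, v in enumerate(values)
            if abs(0.6745 * (v - med) / mad) > threshold]

# Illustrative example: daily reported balances with one
# implausible entry (perhaps a misplaced decimal point).
balances = [101.2, 99.8, 100.5, 98.9, 100.1, 1005.0, 99.6, 100.9]
print(robust_outliers(balances))  # flags index 5
```

A flagged index would, in practice, be routed to a human analyst rather than corrected automatically.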
Perhaps the most radical current use case of AI by central banks is its application to ‘alternative’ or ‘unstructured’ data. As Hardoon commented, this might involve checking social media feeds as a gauge for consumer confidence and public sentiment. A powerful AI system programmed with robust parameters could read and transmute text into data usable as part of a more holistic picture of the economy.
This is similarly not an abstract point. Banque de France harnessed non-traditional indicators from social media networks to estimate inflation perceptions. The Central Bank of Malaysia, meanwhile, has applied AI to around 750,000 newspaper articles to improve its forecasting accuracy for gross domestic product growth and demand-side metrics.
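The simplest version of turning text into an economic indicator is a lexicon-based sentiment score. The sketch below is a deliberately crude illustration of the principle: the word lists, sample posts and scoring rule are all hypothetical, and real pipelines of the kind described above rely on domain-tuned lexicons or trained language models.

```python
# Hypothetical word lists for illustration only.
POSITIVE = {"growth", "recovery", "confidence", "expansion", "stable"}
NEGATIVE = {"recession", "panic", "default", "slowdown", "inflation"}

def sentiment_index(documents):
    """Net sentiment per document, (positive - negative word counts)
    divided by total words, averaged across the corpus. Returns 0.0
    for an empty corpus."""
    scores = []
    for doc in documents:
        words = doc.lower().split()
        if not words:
            continue
        pos = sum(w.strip(".,!?") in POSITIVE for w in words)
        neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
        scores.append((pos - neg) / len(words))
    return sum(scores) / len(scores) if scores else 0.0

posts = [
    "Strong recovery and growth expected this quarter.",
    "Households fear recession as inflation bites.",
]
print(sentiment_index(posts))  # slightly negative net sentiment
```

Aggregated over millions of posts, even a crude index of this kind can be correlated against survey-based confidence measures before being trusted as a signal.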
Central bank usage of AI comes with risks
While these use cases are promising and may improve central bank operating models, they harbour issues about the quality and protection of data, the assumptions guiding AI systems and the possible convergence of AI systems. All of this may pose a threat to financial markets, as organisations like the Securities and Exchange Commission have argued.
The challenges associated with data biases are well documented. Minority demographics, more remote regions and lower-income households are often underrepresented in population data sets. And the data that is available is often of poorer quality. Economists – including those at central banks – can confront this by running robustness checks and testing for biases. Yet not all researchers do this well, resulting in data that can be misleading.
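One basic robustness check of the kind described above is simply to compare each group’s share of a sample against its share of the population. The sketch below does this; the group names, figures and 20% relative-shortfall threshold are illustrative assumptions, not any institution’s actual methodology.

```python
def representation_gaps(sample_counts, population_shares):
    """Compare each group's share of the sample with its share of the
    population. Returns groups whose sample share falls more than 20%
    short of their population share (in relative terms), mapped to the
    absolute shortfall."""
    total = sum(sample_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        samp_share = sample_counts.get(group, 0) / total
        if samp_share < 0.8 * pop_share:  # illustrative threshold
            gaps[group] = round(pop_share - samp_share, 3)
    return gaps

# Hypothetical survey: rural households underrepresented.
sample = {"urban": 900, "rural": 100}
population = {"urban": 0.7, "rural": 0.3}
print(representation_gaps(sample, population))  # {'rural': 0.2}
```

A check like this only reveals underrepresentation along dimensions the researcher thought to encode, which is precisely why unchecked biases persist.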
These data-related issues can provide context to the emerging academic crisis around data fraud that has been gripping social science. Such challenges are particularly pronounced with AI since faulty thinking can be buried deep within models that are hard to understand, even for the researchers developing them. A risk for AI’s applications to central banks is that poor ideas, models and data go unchecked because they are shrouded under impressive layers of algorithmic complexity.
Interestingly, while it can be hard to identify errors in AI thinking without rigorously checking the underlying data, it may be easier than ever to misuse the sensitive data that may support AI training or applications.
Considerations needed for data privacy
Applying central bank AI to unstructured data presents several questions that will perturb advocates of tight privacy controls. Should social media posts be subject to government oversight for the purposes of macroeconomic projections? Should private individual-level information be held, in any capacity, by a public financial institution? And if so, what steps should central banks implement to sufficiently anonymise and protect users from relevant breaches of their right to data privacy?
Finally, there are the risks associated with algorithm convergence. If central banks develop models that rely on similar assumptions about the economy, central bank reserve management could potentially converge in asset allocation deliberations. This would theoretically lead to less short-term volatility; central bank reserves would have a greater tendency to move together.
It might also result in periodic, more violent swings, with central banks fleeing to safety or purchasing assets in unison. The effects may be compounded further if such ‘groupthink’ convergence has second-order effects on an AI system: the system could infer that asset managers should double down when there is convergent movement, further increasing the violence of market swings.
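The mechanism can be shown with a toy simulation: when many agents trade on a single shared signal, their order flow stops cancelling out and single-step price moves grow sharply. Everything here – the agent count, the ±1 signals, the linear price-impact rule – is an illustrative assumption, not a model of any actual reserve management system.

```python
import random

def simulate(correlated, n_agents=50, steps=200, seed=7):
    """Toy price path where each step's move is proportional to
    aggregate order flow. Correlated agents all act on one shared
    signal; independent agents each draw their own. Returns the
    largest single-step price move as a crude gauge of swing size."""
    rng = random.Random(seed)
    price, max_move = 100.0, 0.0
    for _ in range(steps):
        if correlated:
            flow = rng.choice([-1, 1]) * n_agents  # everyone trades together
        else:
            flow = sum(rng.choice([-1, 1]) for _ in range(n_agents))
        move = 0.01 * flow  # linear price impact (assumption)
        price += move
        max_move = max(max_move, abs(move))
    return max_move

# Independent signals mostly cancel; a shared signal does not.
print(simulate(correlated=False), simulate(correlated=True))
```

The convergent run produces much larger swings, illustrating why similar assumptions across institutions could matter even without any single model being wrong.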
The consequences of this were evident previously in the private sector, notably during the 2010 flash crash. Yet convergence remains a largely theoretical point: AI would first need to demonstrate that it is much better at reserve management than humans. Until technology advances further, such an outcome will remain the stuff of science fiction.
AI usage by central banks more broadly, however, is not science fiction. The global push for AI governance and regulatory provisions has been slow-moving and increasingly co-opted by self-interested technology innovators. It has so far failed to adequately consider the guardrails that ought to guide central bank AI deployment. To date, AI central banking remains a gap in existing academic literature, research and policy. Given the sensitivity of central bank data, and the need to ensure that central bank usage of AI is safe and transparent, it is increasingly urgent that economists and policy-makers fill this gap.
Julian Jacobs is Senior Economist at OMFIF.