Outlook 2025: Politics and public opinion will shape AI regulation

Governing AI in an era of rapid transformation

Leading players in the world of artificial intelligence have compared advanced AI to the splitting of the atom: a revolutionary technology with great potential for both good and ill. Now, the defining policy challenge of the age is shaping up to be how to harness AI’s transformative power while mitigating a host of risks – both familiar and novel.

Moving into 2025, policy-makers and regulators must navigate three key factors, all of them in flux: growing geopolitical competition; technology that is increasingly powerful but remains unpredictable; and shifting public sentiment.

Politics: from arms control to arms race

In this decade, AI development has been shaped by a global tug-of-war between acceleration and caution. Early momentum favoured ‘safety-first’ institutions and regulations, exemplified by the non-profit roots of OpenAI – a major AI developer – and the creation of a network of national AI safety institutes. A familiar gradient runs from a more laissez-faire US, via the UK in the middle, to a more statist European Union and China.

But, by 2025, governments are worried about growth and anxious not to be outdone in an economic and strategic arms race. In September 2024, the governor of California, a trend-setting state, vetoed a landmark AI safety bill, while bills in the UK, China, Canada and elsewhere have been delayed as lawmakers try to calibrate their approach. Here is what to watch in 2025.

Trump as wildcard in the US

US President Donald Trump’s campaign rhetoric cast regulation as anti-growth. In his first week in office, he rescinded Joe Biden’s landmark 2023 executive order on AI safety, rolling back federal oversight. He also announced $500bn of private sector investment in AI infrastructure in an effort to extend the US’s lead in the technology.

Expect more friction with China, the maintenance or further tightening of AI chip export controls and a firm emphasis on both the economic and national security dimensions of AI competition. This will complicate the task of policy-makers seeking to develop international approaches to AI governance, as well as that of multinational financial firms, which must manage an increasingly complex set of expectations.

What may replace Biden’s regulatory framework is less clear. Trump has said little about the specifics of AI policy, and influential voices in his camp are split on key issues, including whether loosely regulated AI development may present severe, even existential, threats. Trump may create a new centralised body to guide AI development, alongside new federal regulation – or leave rule-making mainly to the states and courts.

‘Course-correcting’ to growth in the UK

In January 2025, the UK’s Labour government unveiled a plan to increase public AI computing power 20-fold by 2030 in a bold effort to ‘shape the AI revolution’. Regulators have been charged with fuelling ‘fast, wide and safe development and adoption of AI’ in their sectors and must report back on how they have fared. If progress lags, it is proposed, a new central regulatory body for AI could be created with a ‘higher risk tolerance’ for experimentation.

As part of the strategy, ‘AI champions’ will be appointed for key industries, including finance, with a new industrial strategy to be published in late spring. Meanwhile, a forthcoming AI bill is expected to focus narrowly on risks from the most powerful models.

For the finance sector, this is unlikely to mean a rollback of safeguards. But expect greater appetite for public-private initiatives, the use of regulatory sandboxes and a focus on how finance can support wider economic transformation.

Devil in the details for the EU

The EU AI Act’s phased implementation over 2024-30 continues. In 2025, codes of practice for AI developers will be finalised and member states will designate oversight bodies, with most obligations coming into force in August 2026. The act sets out a risk-based framework for different uses of AI, with a comprehensive set of assessment, monitoring and reporting obligations. In finance, assessing individuals’ creditworthiness and pricing life and health insurance are deemed ‘high risk’, attracting the strictest scrutiny the act applies to permitted uses.

Tensions may emerge if European firms chafe under these requirements while the US and UK ease up. With France home to the EU’s largest AI sector and a rightward shift looking likely in Germany’s February 2025 elections, might a partial policy easing be on the horizon?

China aims at global leadership

China has a stated goal of being the world’s AI leader by 2030. It is banking on its huge domestic market, state-coordinated investment and research, a large AI sector centred on its major technology firms and a drive to lead in integrating AI into industrial processes.

China is also accelerating its drive for self-sufficiency in the most advanced AI chips – a significant challenge – and promoting the adoption of standards for AI through multilateral bodies. As a result, the bifurcation into US-led and Chinese-led AI ecosystems is likely to deepen, creating a headache for firms with feet in both camps.

Technology: in finance, boring is beautiful

Going into 2025, tech industry buzz has focused on ‘AI agents’ – sophisticated, multimodal AI systems that can plan and perform tasks autonomously across a wide range of domains. These may be well suited to general business processes and will ultimately transform many of the industries that finance serves, but they bring unwanted unpredictability to regulated processes. Financial institutions remain cautious.

Instead, within financial institutions, look out for ‘targeted automation’ over general-purpose AI agents, the deployment of smaller-scale models and a focus on combating fraud and cyberattacks.

Financial firms will accelerate AI deployment, focusing on well-defined internal processes such as operations, knowledge retrieval and coding, and keeping humans firmly ‘in the loop’. Rather than relying entirely on universal, big-box models, firms will turn to specialised ‘mini-models’ fine-tuned on proprietary data for specific tasks. This helps to lower operational risk, address privacy concerns and maintain consistent performance. Extensive use of AI to interact with customers or markets is further off, at least at regulated institutions.

Institutions must also battle a rising wave of fraud and cyberattacks, driven by the lower costs and new capabilities unleashed by AI, including the ability to mimic voice and video convincingly. Firms will hasten to educate customers and rethink security measures, including ‘voice password’ systems.

Public opinion: is pushback overdue?

Global polling on AI finds the public excited and nervous in roughly equal measure, with Asian countries particularly positive. This is perhaps surprising, given that one-third of respondents – and almost one-half of Gen Z – expect AI to replace their jobs within five years. The International Monetary Fund estimated in early 2024 that around 30% of jobs in the West are at risk of replacement by AI. One year on, the chief executive officer of Anthropic, a major AI developer, predicted that by 2027, AI will be ‘broadly better than almost all humans at almost all things’.

If AI-driven labour market disruption happens rapidly, opposition could crystallise fast, even if better jobs are created in the longer term. Concerns may also form around privacy, bias, climate or other issues. Financial institutions may find themselves easy targets of ire if, for example, they become unwilling to lend to people and firms in sectors previously seen as safe bets. Governments, too, will be watching public sentiment and expectations carefully – expect more rigorous requirements for ‘explainable AI’ and consumer protection measures.

Yet these are exciting times. Another common parallel drawn with AI is the Industrial Revolution, which brought many harms but also unprecedented prosperity. If the parallel holds, AI will imply a wholesale reconfiguring of our economies and societies. There will be a great need for investment, as well as deeper questions: about the distribution of gains, the roles of capital and labour and the place of markets in managing risk. The financial sector, now as then, will have a crucial role to play in ushering in a new chapter for humanity.

Andrew Sutton is a trustee of the London Initiative for Safe AI and a commercial banker by background.
