European Union acts on artificial intelligence safety

Does the legislation go far enough?

On 13 March 2024, the European Union passed a landmark piece of legislation – the most significant attempt to regulate artificial intelligence to date. It has become a major wedge issue in the AI policy community. On one side, onlookers herald the act as the first piece of real AI governance, one that could serve as a foundation for future safety commitments globally. On the other, sceptics argue that it imposes onerous restrictions that will stifle AI innovation.

Regulators face an extraordinarily difficult task in making sense of AI and setting sufficient parameters for its use. Though the act is clearly a landmark in AI rulemaking, it will do little to prevent the most significant risks emerging from the flurry of global AI competition.

From as early as the 1960s there have been calls for the regulation of AI. In many cases, these calls preceded the development of the technology itself, driven in part by the moral and literary significance of ‘machine intelligence’ as a concept. The EU AI Act is the first major attempt to legislate acceptable use cases and parameters for AI deployment.

The act distinguishes AI systems by categories of risk: unacceptable, high, limited and minimal. Unacceptable-risk systems are banned outright because they present a clear threat to people’s safety and livelihoods; banned applications include those related to social scoring and emotion detection. High-risk systems are those with the potential to cause harm through access to sensitive data. These include applications in justice systems, law enforcement, public services and critical infrastructure. Within finance, AI applications used in determinations about creditworthiness and insurance claims, for example, would be restricted.

Consistency or innovation?

There is little doubt that the EU AI Act is a pioneering legislative framework aimed at creating guardrails for safe AI use. The tiered system of risk is useful, and it will create consistency in AI rules across a very large jurisdiction. Provisions on model training, privacy and compensation for intellectual property owners mean that unlicensed content would not be acceptable for model training. Users must meanwhile be informed of how their personal data is used in the training process.

Some have argued that these measures are too restrictive for large language models, claiming that the EU, which lags well behind the UK, US and China in AI development, is smothering an industry that is only beginning to emerge. Yet the EU AI Act articulates many of the hopes for safer AI development that had been put forward by policy researchers.

However, there are legitimate concerns about whether the new legislation has kept pace with the technology. For instance, the act requires a ‘sufficiently detailed summary of the content’ used to train general-purpose AI systems, while also providing protections for trade secrets. Yet many AI companies are loath to reveal information about how they train their models, and they may look to circumvent these rules by pursuing development in less restrictive jurisdictions.

This might create a race-to-the-bottom dynamic, whereby countries looking to win the AI race are strongly incentivised to impose minimal restrictions. This is already apparent in the approaches taken by the US and UK. The former views AI as vital to continued international military and economic hegemony. The latter views it as central to resuscitating its weakened post-Brexit economy and global reputation.

These dynamics point to perhaps the greatest weakness of the EU AI Act. Though it provides an initial foundation for global AI policy, it will struggle against the overwhelming fervour of the AI race. The winner-takes-all nature of AI might mean that governments and leading AI companies will continue to experience strong incentives for rapid innovation, even at the expense of safety. The danger is therefore that the EU AI Act and other national stipulations – such as those emerging from the UK AI Safety Institute or the US AI executive order – create a false sense of security.

In the nuclear domain, domestic rules on weapons development – monitoring requirements, security and safe handling – matter, but global incentive structures and co-operation have been crucial to preventing their use. Similarly, though legislative commitments for AI are a good step, international co-ordination between leading countries, which creates incentives for global AI safety, remains vital.

The EU AI Act provides a first step in global governance, but the key test will come from how the leading economies view their commitments to global and domestic security.

Julian Jacobs is Senior Economist, Digital Monetary Institute, OMFIF and Research Lead, Future Impact Group, University of Oxford. AJ Mannan is a Research Assistant at the Future Impact Group, University of Oxford.
