The artificial intelligence arms race is heating up

AI is becoming an important weapon for fraudsters

Artificial intelligence is already being heralded as this generation’s transformational technology. As with all leaps in technical capabilities, there are risks that its abuse will mean more fraud and financial crime, and potentially jeopardise financial stability. On the other hand, it will add powerful new tools to regulators’ arsenals.

At an OMFIF roundtable, a panel of experts and regulators broke down some of the key ways in which AI is going to affect the landscape of financial crime and the regulatory toolbox.

New types of identity fraud

Fraud is one of the most important battlegrounds in the AI arms race. The proliferation of tools that can convincingly fake both audio and video of individuals gives fraudsters a new and uniquely dangerous weapon for impersonating counterparties. Generative AI makes it possible to mount targeted, personalised fraud attempts at an unprecedented scale. Already, a rash of finance professionals have been hoodwinked by video calls purporting to come from their chief financial officers, many resulting in multi-million dollar frauds.

We are vulnerable to these frauds because we lack effective means of remotely verifying identities. Most ID verification technology – such as passports and driving licences – is designed to be used in person. Remote verification via photocopies of ID documents is easy to defeat, especially with AI tools.

Governments, law enforcement agencies and employers must move quickly to ensure people are adequately trained to detect these increasingly sophisticated attacks. Call-back protocols are the simplest defence: individuals contact their supposed counterparty through a different channel of communication to verify the request. The protocol relies on the fact that a fraudster impersonating someone can usually do so through only one medium – perhaps a disguised email address or phone number – so reaching the real individual through another medium should confirm or expose the request, as sketched below.
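
To make the logic concrete, here is a minimal sketch of a call-back check in Python. The directory, phone number and confirmation helper are hypothetical stand-ins; the essential point is that the contact details used for the call-back come from records verified in advance, never from the suspicious message itself.

```python
# Minimal sketch of an out-of-band ("call-back") verification check.
# VERIFIED_DIRECTORY and confirm_via_phone() are hypothetical stand-ins
# for an organisation's pre-verified contact records and comms tools.

VERIFIED_DIRECTORY = {
    # Contact details recorded and verified in advance.
    "cfo@example.com": {"phone": "+44 20 7946 0000"},
}

def confirm_via_phone(number: str, summary: str) -> bool:
    """Call a pre-verified number and ask the real person to confirm.
    Stubbed with console input for illustration."""
    print(f"Calling {number} to confirm: {summary}")
    return input("Did the counterparty confirm? (y/n) ").strip().lower() == "y"

def verify_request(sender: str, summary: str) -> bool:
    record = VERIFIED_DIRECTORY.get(sender)
    if record is None:
        return False  # Unknown sender: escalate rather than proceed.
    # Crucially, the call-back uses the directory's number,
    # never a number supplied in the incoming message.
    return confirm_via_phone(record["phone"], summary)

if __name__ == "__main__":
    approved = verify_request("cfo@example.com", "Transfer £2m to new supplier")
    print("Proceed" if approved else "Block and escalate")
```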

That kind of protocol is not foolproof. Very sophisticated fraudsters might have control of more than one medium of communication. Even if they do not, confirming through another channel relies on the counterparty being available at the time. Fraudsters often rely on creating a feeling of urgency to force their victims to make rash decisions under pressure.

What’s the solution?

Remote ID verification requires a cryptographic solution. While government-provided digital identities may prove time-consuming and controversial, private companies already have many of the technical prerequisites to offer this kind of system. Phones routinely verify their owners' identities with biometrics, passwords and location data, and then serve as a means of verifying identity on other devices via multi-factor authentication. Exchanging this information, perhaps via zero-knowledge proofs, could offer a way to verify the identity of a counterparty immediately and securely.
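
The core mechanism can be illustrated with a challenge-response signature check. The sketch below uses Python's `cryptography` library and is a simplification, not any vendor's actual scheme: the device holds a private key (unlocked locally by biometrics or a passcode) and proves who it belongs to by signing a fresh random challenge, which the verifier checks against a public key registered at enrolment.

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrolment: the device generates a key pair and the verifier stores the
# public half. In practice the private key would sit in the phone's secure
# hardware, unlocked by biometrics or a passcode.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# Verification: the verifier issues a fresh random challenge, so a captured
# signature cannot be replayed in a later session.
challenge = os.urandom(32)

# The device signs the challenge with its private key...
signature = device_key.sign(challenge)

# ...and the verifier checks it against the enrolled public key.
# verify() raises InvalidSignature if the proof does not match.
registered_public_key.verify(signature, challenge)
print("Identity confirmed: signature matches the registered key.")
```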

AI can also be a weapon for supervisors. AI tools are already proving useful in identifying scammers' tactics and in tracing funds transferred to fraudsters. Machine learning and big-data analysis are likely to be the key weapons here, though generative models are arriving too: in February, Mastercard announced a generative AI tool that it claims boosts fraud detection rates by an average of 20%.
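
For a flavour of what such tooling looks like in practice, the sketch below trains an unsupervised anomaly detector on toy transaction features using scikit-learn's IsolationForest. The features and data are invented for illustration; production systems draw on far richer behavioural and network signals.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy transaction features: [amount, hour of day, payee seen before (0/1)].
rng = np.random.default_rng(0)
normal_history = np.column_stack([
    rng.normal(120, 40, 500),   # typical payment amounts
    rng.integers(8, 18, 500),   # business hours
    np.ones(500),               # payees the customer has used before
])

# Fit on historical activity assumed to be overwhelmingly legitimate.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_history)

# A large 3am transfer to a never-seen payee should stand out.
suspect = np.array([[25_000, 3, 0]])
print(model.predict(suspect))   # -1 flags an anomaly, 1 looks normal
```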

While financial supervisors will be keen to use the power of AI to summarise documents, identify fraudulent transactions in real time, read thousands of corporate filings and flag anomalies, these benefits come alongside threats.

Economic and societal risks

The risks and benefits of AI proliferation go beyond fraud detection. At the OMFIF roundtable, the panel discussed the degree to which more effective analysis of new, often unstructured data sources might lower the cost of insuring certain activities.

But the risk here is that AI acquires outsized importance in dictating economic activity. Data analysis might drive down insurance or borrowing costs for some but, for activities without troves of data from which AI can extract predictive inferences, costs may rise. The panel warned that such activities might become more expensive to finance or insure, which could leave lower-income or otherwise financially excluded groups economically uninsurable. The proliferation of AI in insurance might also entrench existing social divisions.

Regulators may wish to curtail excessive market power or distortions enabled by AI, but doing so will prove tremendously difficult. Intervening to require companies to insure less safe or less profitable concerns might be deemed anti-business, and regulating the use of AI in a given jurisdiction might leave domestic players at a competitive disadvantage to their foreign counterparts. Regulating the AI providers themselves might simply chase them to friendlier jurisdictions.

Calls to pause development while governance frameworks catch up have gained little traction. The arms race is on. Bad actors will be doing their best to subvert AI and use it for their own ends. All we can do is try to keep up.

Lewis McLellan is Editor, Digital Monetary Institute, OMFIF.

OMFIF’s ‘AI in finance 2024’ seminar is taking place in partnership with the University of Oxford. Register to attend now.
