Autumn 2023

Scale, speed and sophistication: addressing AI-driven market manipulation

A robust policy approach to AI is essential to prevent serious harm to society and the economy, argues Bilva Chandra, technology and security policy fellow, RAND Corporation.

Market manipulation by malicious actors on the internet is not a novel concept. Neither is the classic dual-use dilemma, in which a technology serves both benevolent and harmful purposes. But the generative artificial intelligence boom and future developments in AI will significantly accelerate market manipulation and misuse, emboldening opportunistic actors to influence financial markets through deceptive activity. Generative AI can increase the scale, speed and sophistication of such activity, with negative macroeconomic consequences.

Deceptive activity in the technology space can consist of tactics such as disinformation, scams, astroturfing and a variety of information operations. There have already been instances where AI has enabled deceptive behaviour with either direct or indirect market effects.

The viral AI-generated image of an explosion at the Pentagon, spread by spam accounts on Twitter earlier this year, caused brief but genuine turbulence in the stock market until it was widely debunked. The US Federal Trade Commission has warned about widespread phone call scams enabled by AI-generated deepfake audio and voice clones that make it seem as if a family member is calling and asking for help. In July, an Indian politician claimed that scandalous audio clips of him released by an opposing party were deepfakes, fuelling information uncertainty and the 'liar's dividend'. Such instances of AI-enabled deception will continue to grow and shape societies and economies, reducing the agency of individual consumers.

Societal and macroeconomic implications

Though it is challenging to anticipate the second- and third-order effects of emerging AI technology on social and economic forces, deceptive activity will most likely grow in scale, speed and sophistication, harming consumers and distorting consumer decision-making in a variety of ways.

Disinformation and influence operations are deceptive activities that AI makes more potent. Large language models make it far easier for malicious actors to create tailor-made influence operations or disinformation campaigns aimed at niche, targeted audiences. The introduction of multimodal models with text and vision capabilities (such as GPT-4V) could enable even more capable, sophisticated disinformation campaigns that pair text with imagery.

Analysis by the Center for Security and Emerging Technology reveals that LLMs can significantly cut the costs of malign influence operations, especially through the use of open-source models with little to no safety or monitoring mechanisms. The rise of AI-generated audio and video deepfakes is further fuelling disinformation, increasing its potency particularly in localised settings and in the global South, where media literacy and fact-checking mechanisms may be lacking.

By using LLMs to tailor content to very specific audiences (reducing cultural, language and other identity barriers to content creation), actors engaged in deceptive advertising practices can scale up targeted advertising at lower cost to serve their own financial interests.

Astroturfing is an attempt to create the impression of a genuine grassroots campaign through deceptive means. LLMs can scale up sophisticated astroturfing efforts, generating comments, posts and other text at speed to spread political messages, amplify advertising campaigns and even flood legislators with AI-generated letters. In a 2023 study, legislators were only marginally less likely to respond to emails generated by GPT-3 than to human-written ones, and LLM capabilities have only increased since then.

Deception and disinformation

The benefits that generative AI offers motivated malicious actors in scale, speed and sophistication are considerable. Though the effects of deceptive content are hard to measure, an increase in this material on the internet can have widespread macroeconomic and societal consequences, shaping consumer buying behaviour and eroding trust in the economy and other institutions.

Generative AI allows for the streamlined creation of harmful or deceptive narratives, the erosion and manipulation of trust in institutions and the targeting of niche populations with persuasive content by opportunistic actors.

A key question is how consumer behaviour and choice will be affected if AI continues to shape, and reduce the authenticity of, content on the internet. Some experts predict that 90% of online content will be AI-generated by 2026. But we do not need to wait until then to witness the harm and disarray created by deceptive AI-generated content.

Policy options to combat this issue in the US include the FTC holding private companies accountable for AI-enabled deceptive activity online by imposing financial penalties. The Federal Election Commission is considering regulating the use of deepfakes in political campaign ads; it should go further and ensure that no campaign financing can be used for politically deceptive activity online. Finally, concrete regulatory standards on transparency and safety for large AI developers and social media platforms are necessary to better understand the harms arising from generative AI and to hold technology companies accountable for tackling them.

The harm stemming from AI-enabled deception should not be underestimated and a robust policy approach to address this issue is essential to protect both consumers and financial markets.
