Comprehensive social safety nets required to combat AI upheaval

Data dignity and universal adjustment assistance among attractive solutions

Automation, powered by artificial intelligence, may be starting to cause job losses. The Wall Street Journal reports that both the top and tail ends of the labour market are now under threat, with generative AI soon to upend a much bigger share of white-collar jobs. Already, companies have attributed ‘more than 4,600 job cuts to AI’ since May 2023, particularly in the media and technology sectors. This follows analysis by the International Monetary Fund, which claims that AI is set to affect nearly 40% of all jobs.

Job displacement by way of technological innovation has been a concern for decades. Studies of industrial robots and computerisation have shown how automation can directly affect labour market dynamics. As these dynamics resurface with AI, are policy-makers prepared to accommodate a seriously retooled model of workforce adjustment?

AI is bound to change the employment landscape – both to workers’ benefit and detriment. It is therefore crucial for countries to establish comprehensive social safety nets and offer retraining programmes for vulnerable workers. How these social safety nets will function remains to be seen, but their design will most likely need to confront long-held concerns about wage inequality.

The technology is also certain to produce a great deal of value. Redistributing some of it might go some way to mitigating its effect on job displacement, but policy-makers and those on the cutting edge of AI development must collaborate to come up with a fair rationale for this process.

Data dignity and compensation

Creating AI models requires access to enormous datasets on which the models can be trained. These are often gathered by web scraping – using software to create a corpus of data from publicly available online sources.

These datasets often contain both personal data and intellectual property. The data dignity model, first theorised in 2018 by Jaron Lanier and E. Glen Weyl, would see people compensated when their data are used to train an AI model.

We use a great many online services for free. The economic model of Google’s search engine relies on capturing the data of its users, analysing it and using it to sell targeted advertising and advantageous positions within search results. The compensation we receive for the exploitation of our data is a free search engine.

But does free access to AI services represent a meaningful compensation for the use of personal or trademarked data? What if an artist’s work or an individual’s data is used in training and they never have occasion to make use of the AI model?

Monetary compensation for the inclusion of one’s data in AI training models is an attractive concept. However, there are some sticking points. Such a system might render AI development economically unfeasible and result in anti-innovation policies.

It is also not necessarily a means of mitigating the effects of job displacement since there is no obvious reason why the inclusion of one’s data in a training set should correlate with such displacement. Policy proposals to address this directly must involve a mechanism that provides a strong social safety net to protect workers while also giving them flexibility for retraining.

Making adjustments

Some forms of universal basic income can make for attractive solutions to combatting AI-induced job losses. However, the Institute for Policy Research’s Luke Martinelli summarised the difficulty of implementing such a solution when he wrote that ‘an affordable UBI is inadequate, and an adequate UBI is unaffordable’. Enacting an AI-focused UBI policy may require gutting other programmes to fund it.

The blanket approach of some UBI solutions means they do not deal with socioeconomic imbalances on an individual level, caused by differing levels of material and institutional access as well as social mobility. This was a limitation of 2020 US presidential candidate Andrew Yang’s proposal of a ‘freedom dividend’, a UBI of $1,000 a month for every American adult over the age of 18. It was viewed by some as providing a stipend to people who might not need it. This flattening approach is also unlikely to improve overall labour market participation. So what will?

In the US, policies such as trade adjustment assistance are in place, but nothing exists for the specific challenges of technological change. This is also mostly the case internationally, as there is no concerted effort to develop a programme that supports the rapid adjustment of workers in the face of automation and AI. One solution could be to expand trade adjustment assistance. Universal adjustment assistance would account for technological change and include considerations for retraining.

Fiscal fantasy?

Whether or not UBI and data dignity are the right mechanisms to maintain some baseline standard of living for displaced people, the goal is ultimately to provide robust social safety nets and protections. Dealing with the problem of AI adjustment will require a range of policy mechanisms. Compensation measures can borrow elements from data dignity models and incorporate universal adjustment assistance, with special attention given to the effects of technological upheaval.

Regardless, these concepts should not be seen as solutions in search of a problem. While the impact of AI has mostly been speculative, prescient findings from the WSJ and IMF show the urgency of developing the support needed to protect workers who may lose their jobs. After all, generative AI is nothing without the data it’s trained on – data that people create.

Janan Jama is Subeditor at OMFIF.
