Sustainability considerations in the AI era
Opinion is divided on artificial intelligence, which helps improve financial tools but could hinder net-zero plans, writes Nneka Chike-Obi, senior director and head of environmental, social and governance ratings and research at Sustainable Fitch.
Artificial intelligence technology has great potential to support sustainable investing. Its ability to aggregate and analyse complex datasets related to environmental, social or governance metrics is a key advantage. But there are a growing number of AI-specific ESG impacts that investors should consider when evaluating sustainability performance.
Climate change and natural capital
Training AI models requires substantial computing power, which translates into energy consumption and carbon dioxide emissions. A 2019 paper estimated the carbon footprint of training one large language model at roughly five times the lifetime emissions of the average American car.
The state of California passed an emissions disclosure law in September 2023 requiring companies with revenues of $1bn or more to report on all three emissions scopes. The legislation will apply to all companies operating in the state, not only those headquartered there, such as OpenAI, whose generative pre-trained transformer system, ChatGPT, is one of the most widely used LLMs in the world. Greater transparency on the technology’s environmental impact could encourage tech sector financing of renewable energy projects and low-carbon data centres, thereby reducing scope 2 emissions from purchased electricity.
The trend towards a greater focus on nature, driven by the work of the Taskforce on Nature-related Financial Disclosures, will also affect AI technology providers because of data centres’ reliance on water and land. A large data centre can use upwards of 1m gallons (3.8m litres) of water per day for cooling. Research from Virginia Tech in 2021 estimated that one-fifth of US-based data centres operate in areas experiencing moderate to high water stress. Local community resistance to new data centre projects over environmental concerns has become more common, occurring in Ireland, the Netherlands, the UK and the US in recent years.
**Material ESG factors for artificial intelligence**

| Factor | ESG category | Example |
| --- | --- | --- |
| Energy efficiency | Environmental | Data centre power usage effectiveness |
| Greenhouse gas emissions | Environmental | Scope 2 emissions from data centre electricity consumption |
| Water intensity | Environmental | Water consumption for server cooling |
| Access, affordability and living standards | Social | Algorithmic screening for consumer finance applications |
| Community engagement | Social | ESG, economic, cultural or other issues affecting social licence to operate |
| Consumer protection | Social | Consumer (lack of) awareness of the use of AI tools in commercial transactions |
| Discrimination and equal pay | Social | Algorithmic screening for employment applications |
| Privacy and data protection | Social | Aggregation of personal information using AI tools |
| Working conditions | Social | Replacement of human workers with AI |

*Source: Sustainable Fitch*
Consumer rights, access to services and data privacy
AI tools are increasingly being used in financial services to screen potential borrowers, but it is unclear whether they overcome or entrench unconscious bias. A 2019 University of California, Berkeley study compared traditional and algorithmic US mortgage lenders to assess discriminatory effects and found that the use of algorithms reduced discrimination against African-American and Latino borrowers by 40%. By contrast, a 2021 investigation by the New York State Department of Financial Services into the treatment of female Apple Card applicants concluded that the algorithm ‘may reflect historical bias or “legacy bias”… Thus, the data used by creditors in developing and testing a model can perpetuate unintended biased outcomes.’
There are various data privacy risks associated with the increased use of alternative credit data for consumers and small- and medium-sized enterprises. Alternative data are drawn from a wider range of sources than the traditional inputs to a credit score or borrower suitability assessment, such as history with speciality lenders (payday lenders, for example), utility and rental payment history, peer-to-peer lending history, social media, customer reviews and mobile app usage.
Of particular concern is that such data can more easily be used to identify an individual, creating a heightened risk of identity theft or fraud should an unauthorised party gain access. This risk increases with the amount of data collected, since more data points make it easier to identify the person. In a survey of 168 Hong Kong-registered banks, the top barrier to AI adoption cited was ‘customers are concerned about the data privacy or other security issues’.
Alternative data sourced from consumer financial transactions are widely used in the investment industry for market intelligence. In September 2021, the US Securities and Exchange Commission fined an AI-based alternative data provider $10m and its chief executive officer $300,000 for failing to aggregate and anonymise data, and for disclosing data to third parties without consent. The judgment did not address the potential harms to consumers associated with the dissemination of their personal information to alternative data providers and their end users.
Workers’ rights and working conditions
Labour unions are also starting to address AI-related concerns about working conditions, with employees in industries ranging from manufacturing to media raising grievances. The European Commission has identified the gig economy as an area where AI-based worker surveillance is becoming more common, leading to worries about privacy and pushing employees towards overwork when the data are used to calculate performance and pay. The European Parliament and the US Senate are considering legislative proposals specifically addressing AI.
In 2023, the two largest unions representing American film workers – the Writers Guild of America and the Screen Actors Guild-American Federation of Television and Radio Artists – went on strike, citing fears that written and performed work could be replaced by AI-generated content. The widespread potential uses of AI across sectors may contribute to increased worker solidarity and possibly more work stoppages. The United Auto Workers union expressed public support for the WGA and SAG-AFTRA strike just two months before commencing its own work stoppage in September 2023.