What AI means for ESG



A year ago this month, OpenAI unveiled ChatGPT to the world, igniting huge excitement — and concern — about the accelerating progress of artificial intelligence. Last week, the UK and US both took high-profile steps aimed at fostering AI development while limiting its negative consequences. These two tracks have emerging implications for environmental, social and governance investing, as I explore below.

Before we get to that, I would like to remind you of our first Moral Money Summit Africa on November 21 in Johannesburg. We’ll be hearing from South African public enterprises minister Pravin Gordhan, Nedbank chair Daniel Mminele, Ghanaian climate activist Joshua Amponsem and many other leading figures in sustainable business and investment on the continent. Moral Money newsletter subscribers can get 20 per cent off an in-person pass or register for the digital pass for free. — Patrick Temple-West

ESG investors grapple with the rise of AI

It was a lovefest.

For nearly an hour last week, UK prime minister Rishi Sunak sat alongside Tesla chief executive Elon Musk to talk about artificial intelligence. The duo joked, laughed and took questions from the audience. 

But all joking aside, the conversation had a darker side too. Musk told Sunak that there “will come a point where no job is needed” and called AI the “most disruptive force in history”.

Sunak’s AI summit at Bletchley Park reflected how this subject is surging up the political agenda — and it has important implications for the world of environmental, social and governance (ESG) investing, too.

On one hand, AI has helped companies more accurately predict extreme weather. Moody’s RMS, for example, told me it is testing AI tools that insurance companies could use to assess climate risks.

But on the other hand — as Musk foreshadowed — AI is likely to take jobs away from workers worldwide. Big technology companies in Silicon Valley are already starting to feel heat from ESG-focused shareholders concerned about job losses due to AI.

Not to be left out, the US also advanced AI policy last week. President Joe Biden issued an executive order that included expanding grants for AI research in areas such as climate change. The order called for the government to collaborate with businesses and universities to foster AI tools “to mitigate climate change risks”. The US said it would also set up an institute to police AI.

On both sides of the Atlantic, AI businesses are wary of government regulations. In a move that appears aimed at avoiding onerous red tape, big AI companies agreed to allow governments in the UK and US to test their latest models for national security and other risks before they are released to businesses and consumers.

For now, businesses selling clean tech tools have expressed optimism that they will be better positioned to navigate government oversight of AI. In clean tech, governments appear willing to stimulate, rather than stifle, AI projects.

“The executive order has a cautious tone when it comes to AI applied to a lot of sectors, but a very promising tone when it comes to investing in research and development for climate change,” Himanshu Gupta, chief executive of start-up ClimateAI, told me. His firm has worked with agriculture companies to develop more drought-resistant crops, among other things.

US vice-president Kamala Harris delivered a speech on artificial intelligence last week at the US embassy in London © AP

“So far, a lot of funding that went into climate came from the venture [capital] world. Those markets also have expectations of higher returns,” Gupta said. But government grants for AI climate funding could ease some of the funding burden for start-ups, allowing them to “keep on working on the next big solution for climate change”, he said.

Ambarish Mitra, co-founder of Greyparrot, a London-based business that uses AI to help waste disposal companies more accurately identify recyclable rubbish, acknowledged that “there may be concerns about government interference potentially complicating AI innovation.”

“Balancing innovation with regulatory oversight is crucial for long-term success,” he said.

As governments ratchet up pressure on AI, ESG investors are doing so too. On October 19, the AFL-CIO, a coalition of US labour unions, said it was filing shareholder proposals at companies to get more information about how they are protecting workers from the impact of AI. The five companies designated for shareholder proposals in the months ahead are businesses with entertainment divisions: Apple, Comcast, Disney, Netflix and Warner Bros Discovery.

This year’s Hollywood strikes have highlighted a need for AI protections, the AFL-CIO said.

“The AI dehumanization of the American workforce threatens the very framework of the nation’s economy,” the AFL-CIO said, “while introducing the potential for discrimination in employment decisions”.

Its proposal for Apple shareholders requests a report on the company’s AI use that discloses “any ethical guidelines that the company has adopted regarding the company’s use of AI technology”.

Apple has asked the Securities and Exchange Commission for permission to block the proposal, a typical tactic companies use to ward off all sorts of ESG shareholder petitions. In an October 23 letter to the SEC, Apple said that discrimination or bias against employees, and decisions to automate jobs and replace workers, are longstanding business issues that arise even without AI technology. The company said it already has procedures in place to deal with them.

A similar shareholder proposal was filed this year at Google’s parent company Alphabet about how the company uses algorithms. BlackRock and Vanguard — the world’s two largest asset managers — did not vote for this proposal, raising questions about how big asset managers will react to such initiatives.

Asset managers are under pressure from Republican politicians to distance themselves from anything that has to do with ESG. The political pressure has so far focused mainly on climate-related issues, but these AI petitions might prove too hot to support.

Whatever happens, companies need to develop internal AI governance strategies now that government regulators are watching and their exposure to new legal liabilities and costly courtroom fights is growing.

“AI and AI-enabled technologies will likely be subject to new federal standards and requirements to promote safety, security and equity,” Paul Stimers, a partner on the AI policy team at law firm Holland & Knight, told me. (Patrick Temple-West)

Smart read

Shell’s chief executive Wael Sawan told the FT about his plans to make the company “leaner” and more selective about how it invests in the energy transition. Since he took the top job in January, Sawan has outlined plans to boost returns by maintaining oil output, expanding the gas business and trimming less profitable parts of the company’s low-carbon portfolio established under his predecessor Ben van Beurden.


