
Exploring the sustainable scaling of AI dilemma: A projective study of corporations' AI environmental impacts

Clément Desroches, Martin Chauvin, Louis Ladan, Caroline Vateau, Simon Gosset, Philippe Cordier

2025-01-30


Summary

This paper examines the growing environmental impact of artificial intelligence (AI), especially large language models, and proposes a way to measure and forecast that impact. The researchers also suggest ways to reduce AI's environmental footprint in the future.

What's the problem?

As AI gets bigger and more complex, it's using a lot more energy and resources. This is bad for the environment, but it's hard for companies to know exactly how much impact their AI is having. The biggest AI companies aren't sharing much information about this, which makes it difficult for others to plan how to reduce their environmental impact.

What's the solution?

The researchers came up with a new way to estimate the environmental impact of a company's AI systems. They found that the newest, biggest generative AI models use up to 4,600 times more energy than older, simpler models. They also projected how much electricity AI might use by 2030, accounting for things like how many more people might start using AI and how much more efficient computers might become. In their most extreme scenario, they project that AI could use 24.4 times more electricity in 2030 than it does now.
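The projection above combines several compounding factors: more usage and more compute per task push electricity demand up, while hardware efficiency gains push it down. A minimal sketch of that kind of compounding-factor calculation is below; the individual growth numbers are hypothetical (they are not the paper's published inputs), chosen only so the combined factor lands near the 24.4x high-adoption figure:

```python
def project_ai_electricity(baseline_twh, usage_growth,
                           compute_per_task_growth, hardware_efficiency_gain):
    """Project future AI electricity use as a product of compounding factors.

    All parameter values passed in are illustrative assumptions:
    usage and compute-per-task growth multiply demand, while
    hardware efficiency gains divide it.
    """
    factor = usage_growth * compute_per_task_growth / hardware_efficiency_gain
    return baseline_twh * factor, factor

# Hypothetical high-adoption scenario: 20x more AI tasks,
# 3.66x more compute per task, 3x better hardware efficiency.
projected_twh, factor = project_ai_electricity(
    baseline_twh=10.0,
    usage_growth=20.0,
    compute_per_task_growth=3.66,
    hardware_efficiency_gain=3.0,
)
print(round(factor, 1))  # → 24.4
```

The sketch makes the study's central point concrete: a single mitigation lever (here, a 3x hardware efficiency gain) is easily overwhelmed when usage and model complexity grow faster.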

Why it matters?

This matters because as AI becomes a bigger part of our lives, we need to make sure it doesn't harm the environment too much. The study shows that just making better computers or using cleaner electricity isn't enough; we need to work on many different things at once to reduce AI's impact. The researchers suggest creating standard ways to measure AI's environmental impact, requiring companies to share more information about their AI's energy use, and introducing a new metric for how environmentally worthwhile an AI system is. All of this could help make sure that as AI grows, it doesn't hurt the planet in the process.

Abstract

The rapid growth of artificial intelligence (AI), particularly Large Language Models (LLMs), has raised concerns regarding its global environmental impact, which extends beyond greenhouse gas emissions to include hardware fabrication and end-of-life processes. The opacity of major providers hinders companies' ability to evaluate their AI-related environmental impacts and achieve net-zero targets. In this paper, we propose a methodology to estimate the environmental impact of a company's AI portfolio, providing actionable insights without necessitating extensive AI and Life-Cycle Assessment (LCA) expertise. Results confirm that large generative AI models consume up to 4600x more energy than traditional models. Our modelling approach, which accounts for increased AI usage, hardware computing efficiency, and changes in electricity mix in line with IPCC scenarios, forecasts AI electricity use up to 2030. Under a high adoption scenario, driven by widespread adoption of Generative AI and agents associated with increasingly complex models and frameworks, AI electricity use is projected to rise by a factor of 24.4. Mitigating the environmental impact of Generative AI by 2030 requires coordinated efforts across the AI value chain. Isolated measures in hardware efficiency, model efficiency, or grid improvements alone are insufficient. We advocate for standardized environmental assessment frameworks, greater transparency from all actors of the value chain, and the introduction of a "Return on Environment" metric to align AI development with net-zero goals.