A survey of 23 machine learning (ML) practitioners in the U.K. and elsewhere, conducted by King’s College London, found that many felt alienated from questions of model sustainability, suggesting that environmental credentials were not seen as part of an AI system’s performance.
Respondents offered comments such as “As an individual, I suppose you can’t really do much,” despite a growing number of tools for tracking the environmental impact of AI, suggesting more needs to be done to empower developers to create greener models.
Dr. Georgia Panagiotidou, a Lecturer in Visualization and an author of the paper, said, “We highlight a fundamental lack of agency at the heart of the AI and sustainability conversation. Despite tracking tools and information about the effect of ML on the environment, many developers feel like their industry doesn’t value sustainability and what they do won’t matter.
“This work isn’t about bashing individuals but working with them to identify blockers to achieving change in the space. By integrating sustainable thinking into all of AI practice, we can help address the lack of knowledge practitioners feel and give them the tools to make sustainable decisions in the face of the climate crisis.”
ML, a subset of AI, has been rolled out widely in recent years as AI tools play a larger part in the world economy. This has come at significant environmental cost. Global greenhouse emissions from the ICT industry have doubled in the past decade, and the resource needs of data centers training ML models have delayed the retirement of coal power plants.
To deal with concerns over the Scope 3 emissions of ICT companies training AI, tools like CodeCarbon have been developed to estimate the emissions produced when executing code or training a model, giving developers a window into the sustainability of their work.
While researchers have studied how information from smart meters influences consumers to make more sustainable decisions, until now little work had examined how eco-feedback tools affect the decisions of ML practitioners.
Participants in the paper, presented at the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT 2025), highlighted technical, individual, regulatory and cultural approaches to cutting carbon emissions, such as using eco-feedback tools and requiring workplaces to rationalize energy use in AI training.
Despite this, most felt they had a limited responsibility to deal with environmental concerns, and that the blame lay with large tech providers of Large Language Models like ChatGPT.
Moreover, participants described how their own fields, whether academia or industry, treated sustainability as secondary (“not one of our results, it’s not a metric of performance”) in cultures that prized high model accuracy and speed in producing papers and new products.
A Ph.D. student described sustainability taking a backseat in the competitive environment of publishing research. “I need to do my research and if I was to tell my supervisor no, I’m not going to use the HPC (high performance computer) because I feel bad for the penguins in the Antarctic, then that wouldn’t go down so well.”
Sinem Görücü, a Ph.D. candidate at King’s and first author of the paper, said, “Qualitative interviews were vital to capture how each individual thought about their work, and we were surprised to find people so disempowered. This was a self-selective group of climate-conscious people but who struggled to be climate-active as developers.
“Responsible AI taught us that something at the periphery of the conversation can become deeply embedded in practice; our research highlights that there is work to be done to repeat that success story with sustainability.”
In the future, the team hopes to do a large quantitative study of the environmental sustainability perceptions of machine learning practitioners.
More information:
Sinem Görücü et al, “As an individual, I suppose you can’t really do much”: Environmental Sustainability Perceptions of Machine Learning Practitioners, Proceedings of the 2025 ACM Conference on Fairness, Accountability, and Transparency (2025). DOI: 10.1145/3715275.3732088
Citation: AI engineers don’t feel empowered to tackle sustainability crisis, new research suggests (2025, July 14), retrieved 14 July 2025 from https://techxplore.com/news/2025-07-ai-dont-empowered-tackle-sustainability.html