New method significantly reduces AI energy consumption

The SuperMUC-NG at the Leibniz Supercomputing Centre is the eighth fastest computer in the world. Credit: Veronika Hohenegger, LRZ

AI applications such as large language models (LLMs) have become an integral part of our everyday lives. The required computing, storage and transmission capacities are provided by data centers that consume vast amounts of energy. In Germany alone, this amounted to about 16 billion kWh in 2020, or around 1% of the country’s total energy consumption. For 2025, this figure is expected to increase to 22 billion kWh.

The arrival of more complex AI applications in the coming years will substantially increase the demands on data center capacity, since training their neural networks consumes huge amounts of energy. To counteract this trend, researchers at the Technical University of Munich (TUM) have developed a training method that is 100 times faster while achieving accuracy comparable to existing procedures. This will significantly reduce the energy consumed in training.

They presented their research at the Neural Information Processing Systems conference (NeurIPS 2024), held in Vancouver Dec. 10–15.

Neural networks, which are used in AI for tasks such as image recognition or language processing, work in a way inspired by the human brain. They consist of interconnected nodes called artificial neurons. Each node weights its input signals with certain parameters and sums them up; if a defined threshold is exceeded, the signal is passed on to the next node.
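In code, such a node amounts to little more than a weighted sum followed by a threshold check. The sketch below uses hand-picked, illustrative weights and a simple step-like rule; real networks typically use smooth activation functions instead.

```python
import numpy as np

def neuron(inputs, weights, bias, threshold=0.0):
    # Weight the input signals, sum them up, and pass the signal on
    # only if the defined threshold is exceeded.
    activation = np.dot(inputs, weights) + bias
    return activation if activation > threshold else 0.0

# Illustrative example: three input signals with hand-picked weights
x = np.array([0.5, -1.2, 0.8])
w = np.array([0.9, 0.3, -0.5])
print(neuron(x, w, bias=0.1))
```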

To train the network, the initial selection of parameter values is usually randomized, for example, using a normal distribution. The values are then incrementally adjusted to gradually improve the network predictions. Because of the many iterations required, this training is extremely demanding and consumes a lot of electricity.
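As a rough illustration of what "iterative" means here, the toy loop below trains a tiny network with gradient descent: the parameters start from a normal distribution and are nudged thousands of times. The network size, learning rate, and task are arbitrary choices for the sketch, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn y = sin(2x) from 200 samples
X = rng.normal(size=(200, 1))
y = np.sin(2 * X)

# Initial parameters drawn at random from a normal distribution
W1, b1 = rng.normal(size=(1, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

lr = 0.05
for step in range(5000):          # the many iterations are the costly part
    H = np.tanh(X @ W1 + b1)      # hidden layer
    pred = H @ W2 + b2            # network prediction
    err = pred - y
    # Gradients of the mean squared error, used to adjust all
    # parameters incrementally
    dH = (err @ W2.T) * (1 - H**2)
    W2 -= lr * (H.T @ err) / len(X)
    b2 -= lr * err.mean(axis=0)
    W1 -= lr * (X.T @ dH) / len(X)
    b1 -= lr * dH.mean(axis=0)
```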

Parameters selected according to probabilities

Felix Dietrich, professor of Physics-enhanced Machine Learning, and his team have developed a new method. Instead of determining the parameters between the nodes iteratively, their approach uses probabilities: it deliberately draws on critical locations in the training data, places where values change sharply and rapidly.
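The published construction is more involved, but the flavor of such weight-sampling approaches can be sketched as follows: pairs of training points are drawn with probability proportional to how sharply the target changes between them, each pair directly fixes one hidden neuron's parameters, and only the output layer is then fitted in a single least-squares solve. Everything below (the sizes, the pair-scoring rule, the tanh activation) is an illustrative assumption, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: learn y = sin(3x) on [-2, 2]
X = rng.uniform(-2, 2, size=(400, 1))
y = np.sin(3 * X)

n_hidden, n_candidates = 64, 2000

# Score candidate pairs of training points: pairs where the target
# changes a lot over a short distance ("critical locations") get a
# higher probability of being sampled.
i = rng.integers(0, len(X), size=n_candidates)
j = rng.integers(0, len(X), size=n_candidates)
dist = np.linalg.norm(X[j] - X[i], axis=1) + 1e-9
score = np.abs(y[j] - y[i]).ravel() / dist
picks = rng.choice(n_candidates, size=n_hidden, p=score / score.sum())

# Each sampled pair directly fixes one hidden neuron's weight and bias;
# these parameters are never adjusted iteratively.
a, b = X[i[picks]], X[j[picks]]
W = (b - a) / (np.linalg.norm(b - a, axis=1, keepdims=True) ** 2)
bias = -np.sum(W * (a + b) / 2, axis=1)

# Only the linear output layer is fitted, in one least-squares solve
H = np.tanh(X @ W.T + bias)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
pred = H @ beta
```

Because the hidden parameters are sampled rather than optimized, the only substantial computation is the final linear solve, which is where the speed-up over gradient-based training comes from.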

The objective of the current study is to use this approach to learn energy-conserving dynamical systems from data. Such systems change over time according to certain rules and are found, for example, in climate models and in financial markets.
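To make "energy-conserving" concrete: a harmonic oscillator (a frictionless pendulum at small swing angles) evolves by fixed rules while its total energy stays constant. The minimal simulation below, with an arbitrary step size, illustrates the kind of system meant here; it is not code from the study.

```python
# Harmonic oscillator: position q and momentum p evolve by fixed rules,
# and the total energy E = (p**2 + q**2) / 2 is conserved over time.
def step(q, p, dt=0.01):
    # Semi-implicit (symplectic) Euler update, which approximately
    # preserves the energy over long simulation horizons
    p = p - dt * q
    q = q + dt * p
    return q, p

q, p = 1.0, 0.0                 # start with energy 0.5
for _ in range(10_000):
    q, p = step(q, p)
print((p**2 + q**2) / 2)        # remains close to 0.5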

“Our method makes it possible to determine the required parameters with minimal computing power. This can make the training of neural networks much faster and, as a result, more energy efficient,” says Dietrich. “In addition, we have seen that the accuracy of the new method is comparable to that of iteratively trained networks.”

Provided by
Technical University of Munich


Citation:
New method significantly reduces AI energy consumption (2025, March 6)
retrieved 6 March 2025
from https://techxplore.com/news/2025-03-method-significantly-ai-energy-consumption.html
