Matrix multiplication is a key operation in AI, but it consumes a significant amount of energy.
Many projects are working on ways to improve the efficiency of these operations.
However, the algorithm the researchers developed has a drawback: It needs specialized hardware.
Every time someone uses ChatGPT or Midjourney, they’re contributing to significant energy (and water) consumption. This is one of the problems with the rise of AI, and companies are considering using nuclear power plants to meet that demand. While many are looking for ways to meet these energy needs, others are going in the opposite direction and trying to make AI consume much less.
AI is on an energy diet. According to the tech outlet Decrypt, researchers at BitEnergy AI recently developed a new technique to reduce the energy consumption of AI models. They say their method decreases energy use by up to 95% without compromising the quality of these systems.
Multiplying better. The key lies in the way floating-point numbers are multiplied, a computationally intensive operation at the core of AI workloads. Instead of multiplying, the researchers use integer additions, which, according to their analysis, drastically reduces the energy the operation requires.
Numbers with a decimal point. Floating point is a way of representing numbers that lets computers handle both huge and tiny values efficiently by storing the digits and the position of the point separately. The more bits, or “width,” a format uses, the more precision you can obtain in calculations. However, this also increases the energy (and the memory) the system requires.
For example, FP32, which is widely used in deep learning, offers much higher precision than FP8, a narrower format used for training and inference when less precision is acceptable.
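To make the trade-off concrete, here’s a quick illustration of our own (not from the paper) using NumPy. NumPy has no FP8 type, so float16 stands in for a narrower format:

```python
# Illustration: narrower floating-point formats trade precision for cheaper math.
# NumPy has no FP8 dtype, so float16 stands in here for a low-precision format.
import numpy as np

x = 0.1234567

print(f"{np.float32(x):.8f}")  # ~0.12345670 -> close to the original value
print(f"{np.float16(x):.8f}")  # ~0.12347412 -> visibly rounded after a few digits

# The narrower the format, the cheaper each multiplication is in hardware,
# but the more rounding error can accumulate across billions of operations.
```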
L-Mul. The algorithm the BitEnergy AI researchers developed, called L-Mul, replaces floating-point multiplications with integer additions. By decomposing each multiplication into additions, it speeds up calculations and reduces energy consumption without noticeably affecting the accuracy of the result.
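Roughly speaking, for two numbers written as (1 + m) · 2^e, the product’s significand (1 + m₁)(1 + m₂) is approximated by 1 + m₁ + m₂ plus a small fixed offset, so the expensive mantissa multiplication disappears. The sketch below is our own illustration of that idea on ordinary Python floats; the real algorithm works on the integer bit patterns of low-precision formats, and the function name, offset choice, and positive-only inputs are assumptions of this sketch, not the authors’ code:

```python
# Rough sketch of the L-Mul idea (our illustration, not the authors' implementation):
# approximate x * y by adding exponents and adding mantissa fractions plus a small
# constant offset, instead of multiplying the mantissas.
import math

def lmul_approx(x: float, y: float, offset_bits: int = 4) -> float:
    """Approximate x * y without a mantissa multiplication (positive inputs only)."""
    mx, ex = math.frexp(x)          # x = mx * 2**ex, with mx in [0.5, 1.0)
    my, ey = math.frexp(y)
    sx, sy = 2 * mx, 2 * my         # shift to the usual 1.f form, in [1.0, 2.0)
    ex, ey = ex - 1, ey - 1
    fx, fy = sx - 1.0, sy - 1.0     # fractional parts (the "mantissas")
    offset = 2.0 ** -offset_bits    # stand-in for the paper's fixed correction term
    significand = 1.0 + fx + fy + offset   # additions replace fx * fy
    return math.ldexp(significand, ex + ey)

a, b = 1.2, 1.3
print(a * b)              # exact product (about 1.56)
print(lmul_approx(a, b))  # approximation (~1.5625), computed without multiplying mantissas
```

How close the approximation lands depends on the operands; the researchers’ claim is that, across the billions of operations in a model, the overall accuracy is essentially preserved.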
It requires special hardware. This method has a drawback, however: It needs a specific type of hardware, and current systems aren’t optimized to achieve the claimed reduction in energy consumption. Even so, the researchers say hardware that implements their algorithm is already being developed, so a similar option is likely to be available in the future.
A promising proposal. Researchers claim that this technique enables AI systems to “potentially reduce 95% energy cost by element wise floating point tensor multiplications and 80% energy cost of dot products.” Tensors are multidimensional arrays of numbers that represent data in neural networks.
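To illustrate the terminology in that quote (our example, unrelated to the paper’s measurements), here’s what an element-wise tensor multiplication and a dot product look like in NumPy:

```python
# The two operations named in the quote, shown on tiny NumPy arrays.
import numpy as np

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[5.0, 6.0], [7.0, 8.0]])

elementwise = a * b        # element-wise tensor multiplication: one multiply per pair of entries
dot = np.dot(a[0], b[0])   # dot product: multiplications plus an accumulating sum

print(elementwise)         # [[ 5. 12.]
                           #  [21. 32.]]
print(dot)                 # 1*5 + 2*6 = 17.0
```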
Matrix multiplications can be challenging. Improving matrix multiplication has become a notable mathematical challenge for research groups and companies. DeepMind revealed its own system for speeding up matrix multiplication (AlphaTensor) at the end of 2022, and just a year later, a team from the Polytechnic University of Valencia in Spain presented an equally promising alternative.
Image | Markus Spiske