QuEST: Stable Training of LLMs with 1-Bit Weights and Activations
Andrei Panferov, Jiale Chen, Soroush Tabesh, Roberto L. Castro, Mahdi Nikdan, Dan Alistarh
2025-02-10
Summary
This paper introduces QuEST, a new method for training large AI models with extremely low-precision values, down to 1-bit weights and activations, making them faster and less memory-hungry while keeping their accuracy high.
What's the problem?
Large language models (LLMs) require huge amounts of memory and computing power, making them expensive and hard to deploy. Compressing them after training reduces their size but often hurts accuracy, and directly training them with low-precision numbers has historically been unstable.
What's the solution?
QuEST solves this by making it possible to train AI models with extremely low-precision values, down to 1-bit, without losing stability or accuracy. It uses two key techniques: Hadamard normalization, which reshapes the distributions of weights and activations so they are easier to quantize, and a trust gradient estimator, which corrects the errors that quantization introduces into the gradients. Together, these allow the model to learn effectively even at very low precision, as sketched below.
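For intuition, here is a minimal PyTorch-style sketch of these two ideas: rotate a tensor with a normalized Hadamard matrix, quantize it uniformly, and, in the backward pass, pass gradients only where the quantization error is small. This is an assumed illustration, not the authors' implementation; the class and function names and the specific masking rule are hypothetical.

```python
import torch

def hadamard_matrix(n: int) -> torch.Tensor:
    # Build a normalized n x n Hadamard matrix (n must be a power of two).
    H = torch.ones(1, 1)
    while H.shape[0] < n:
        H = torch.cat([torch.cat([H, H], dim=1),
                       torch.cat([H, -H], dim=1)], dim=0)
    return H / (n ** 0.5)

def quantize_symmetric(x: torch.Tensor, bits: int, scale: torch.Tensor) -> torch.Tensor:
    # Uniform symmetric quantization; 1-bit falls back to sign quantization.
    if bits == 1:
        return torch.sign(x) * scale
    qmax = 2 ** (bits - 1) - 1
    return torch.clamp(torch.round(x / scale), -qmax, qmax) * scale

class HadamardQuant(torch.autograd.Function):
    # Forward: Hadamard-rotate, quantize, rotate back.
    # Backward: straight-through pass masked where the quantization error is
    # small -- an assumed stand-in for QuEST's trust gradient estimator.

    @staticmethod
    def forward(ctx, x, bits, scale):
        n = x.shape[-1]
        H = hadamard_matrix(n).to(x)
        x_rot = x @ H                          # normalize the value distribution
        x_q = quantize_symmetric(x_rot, bits, scale)
        ctx.save_for_backward(x_rot, x_q, scale)
        return x_q @ H.T                       # map back to the original basis

    @staticmethod
    def backward(ctx, grad_out):
        x_rot, x_q, scale = ctx.saved_tensors
        n = grad_out.shape[-1]
        H = hadamard_matrix(n).to(grad_out)
        trust = ((x_rot - x_q).abs() <= scale).to(grad_out)  # keep "trusted" coordinates
        grad_rot = (grad_out @ H) * trust
        return grad_rot @ H.T, None, None

# Example: quantize activations of shape (batch, 64) to 4 bits.
x = torch.randn(8, 64, requires_grad=True)
y = HadamardQuant.apply(x, 4, torch.tensor(0.1))
y.sum().backward()
```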
Why it matters?
This matters because it makes advanced AI models cheaper and faster to run, allowing them to work on devices with limited resources like phones or laptops. It also reduces the environmental impact of training large models by cutting down on energy usage.
Abstract
One approach to reducing the massive costs of large language models (LLMs) is the use of quantized or sparse representations for training or deployment. While post-training compression methods are very popular, the question of obtaining even more accurate compressed models by directly training over such representations, i.e., Quantization-Aware Training (QAT), is still open: for example, a recent study (arXiv:2411.04330v2) put the "optimal" bit-width at which models can be trained using QAT, while staying accuracy-competitive with standard FP16/BF16 precision, at 8-bit weights and activations. We advance this state-of-the-art via a new method called QuEST, which is Pareto-competitive with FP16, i.e., it provides better accuracy at lower model size, while training models with weights and activations in 4 bits or less. Moreover, QuEST allows stable training with 1-bit weights and activations. QuEST achieves this by improving two key aspects of QAT methods: (1) accurate and fast quantization of the (continuous) distributions of weights and activations via Hadamard normalization and MSE-optimal fitting; (2) a new trust gradient estimator based on the idea of explicitly minimizing the error between the noisy gradient computed over quantized states and the "true" (but unknown) full-precision gradient. Experiments on Llama-type architectures show that QuEST induces stable scaling laws across the entire range of hardware-supported precisions, and can be extended to sparse representations. We provide GPU kernel support showing that models produced by QuEST can be executed efficiently. Our code is available at https://github.com/IST-DASLab/QuEST.
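To make point (1) concrete, the sketch below fits a quantization scale by minimizing the mean squared error between a tensor and its quantized reconstruction via a simple grid search. It is a simplified, assumed formulation written for illustration, not the paper's actual fitting procedure; the function name mse_optimal_scale, the grid range, and the grid size are hypothetical.

```python
import torch

def mse_optimal_scale(x: torch.Tensor, bits: int, num_grid: int = 100) -> torch.Tensor:
    # Grid-search a per-tensor scale minimizing ||x - Q(x)||^2 under uniform
    # symmetric quantization; an assumed stand-in for MSE-optimal fitting.
    qmax = 2 ** (bits - 1) - 1 if bits > 1 else 1
    max_abs = x.abs().max()
    best_scale, best_err = max_abs / qmax, float("inf")
    for frac in torch.linspace(0.1, 1.0, num_grid):
        scale = frac * max_abs / qmax
        x_q = torch.clamp(torch.round(x / scale), -qmax, qmax) * scale
        err = ((x - x_q) ** 2).mean().item()
        if err < best_err:
            best_scale, best_err = scale, err
    return best_scale

# Example: fit a 4-bit scale for a Hadamard-normalized weight row.
w = torch.randn(4096)
scale = mse_optimal_scale(w, bits=4)
```

The design point is only that the scale is chosen to minimize reconstruction error rather than taken directly from the maximum absolute value; in practice the grid search could be replaced by an analytic or closed-form fit.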