Optimal Brain Restoration for Joint Quantization and Sparsification of LLMs
Hang Guo, Yawei Li, Luca Benini
2025-09-17
Summary
This paper explores a way to make large language models, the kind powering many AI applications, smaller and faster without losing too much accuracy. It focuses on combining two existing techniques – quantization and pruning – to achieve better compression than either could alone.
What's the problem?
Currently, researchers shrink these massive AI models using methods like quantization (storing the model's numbers at lower precision) and pruning (removing unimportant connections). However, each method on its own is hitting its limits, so squeezing out further compression with just one of them has become hard. Combining them runs into a deeper conflict: quantization works best when the weights sit in a compact range, so that a coarse grid of levels covers them accurately, while pruning relies on a wide spread of values, because large outlier weights are exactly what signals which connections to keep. These goals clash, making it hard to use both effectively.
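The conflict can be seen in a toy experiment. The sketch below (illustrative only; the threshold values and distribution are made up, not taken from the paper) quantizes a weight vector with a few large outliers: the outliers make magnitude pruning easy but stretch the quantization range, and clipping them to a compact range shrinks the quantization error at the cost of that pruning signal.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize_sym(w, bits=4):
    # Symmetric uniform quantization: round onto 2^bits evenly spaced levels.
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

# A weight row with a few large "outlier" values, as often seen in LLM layers.
w = rng.normal(0, 0.02, 1000)
w[:5] = [0.5, -0.4, 0.45, -0.5, 0.42]

# Pruning likes this spread: the outliers clearly stand out, so
# magnitude-based selection of what to keep is easy.
keep = np.abs(w) >= np.quantile(np.abs(w), 0.5)  # 50% sparsity

# Quantization dislikes it: the outliers stretch the range, so the step
# size is too coarse for the many small weights.
err_wide = np.mean((w - quantize_sym(w)) ** 2)

# Clipping to a compact range quantizes far better, but flattens the
# very variance that made pruning decisions obvious.
w_compact = np.clip(w, -0.05, 0.05)
err_compact = np.mean((w_compact - quantize_sym(w_compact)) ** 2)

print(err_wide > err_compact)  # True: the compact range quantizes better
```

Running this, the wide-range quantization error is orders of magnitude larger than the clipped one, which is exactly the tension the paper describes.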
What's the solution?
The researchers developed a technique called Optimal Brain Restoration, or OBR. It balances quantization and pruning by compensating for the errors each one introduces. OBR starts from a mathematical objective based on the Hessian, a matrix of second derivatives that captures how sensitive the model's loss is to a change in each weight, and uses it to work out how to adjust the remaining weights after pruning and quantization so that the loss in accuracy is minimized. The clever part is that they derive a closed-form formula for this correction, so no retraining of the model is needed, which makes the method efficient.
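The classical building block behind this kind of Hessian-based correction is the Optimal Brain Surgeon update: when one weight is forced to a new value (zero for pruning, a grid point for quantization), the resulting error is spread over the other weights so that the second-order increase in loss is as small as possible. The sketch below illustrates that generic update on a toy quadratic loss; it is my own simplified illustration, not OBR's group-wise closed-form formula.

```python
import numpy as np

def obs_compensate(w, H_inv, idx, new_val):
    """Force w[idx] to new_val and redistribute the error over the other
    weights along the idx-th column of the inverse Hessian, which is the
    second-order-optimal compensation for a quadratic loss."""
    err = w[idx] - new_val
    w = w.copy()
    w -= (err / H_inv[idx, idx]) * H_inv[:, idx]
    w[idx] = new_val  # enforce the constraint exactly
    return w

# Toy quadratic loss L(w) = 0.5 * (w - w0)^T H (w - w0), minimized at w0.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
H = A @ A.T + 1e-2 * np.eye(4)       # positive-definite Hessian
H_inv = np.linalg.inv(H)
w0 = rng.normal(size=4)

def loss(w):
    d = w - w0
    return 0.5 * d @ H @ d

# Prune weight 0 naively (just zero it) vs. with compensation.
w_naive = w0.copy(); w_naive[0] = 0.0
w_comp = obs_compensate(w0, H_inv, 0, 0.0)

print(loss(w_comp) <= loss(w_naive))  # True: compensation never hurts
```

OBR generalizes this idea: instead of compensating one weight at a time, it compensates the combined quantization and pruning error at the group level, which is what yields its closed-form, training-free solution.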
Why does it matter?
This work matters because it pushes large language model compression further than either technique alone. In the experiments, the compressed models used up to 6.4 times less memory and ran up to 4.72 times faster than the uncompressed FP16 baseline, while maintaining good performance on downstream tasks. That could make AI applications more accessible on devices with modest hardware, like phones or laptops, and reduce the energy consumption of large AI systems.
Abstract
Recent advances in Large Language Model (LLM) compression, such as quantization and pruning, have achieved notable success. However, as these techniques gradually approach their respective limits, relying on a single method for further compression has become increasingly challenging. In this work, we explore an alternative solution by combining quantization and sparsity. This joint approach, though promising, introduces new difficulties due to the inherently conflicting requirements on weight distributions: quantization favors compact ranges, while pruning benefits from high variance. To attack this problem, we propose Optimal Brain Restoration (OBR), a general and training-free framework that aligns pruning and quantization by error compensation between both. OBR minimizes performance degradation on downstream tasks by building on a second-order Hessian objective, which is then reformulated into a tractable problem through surrogate approximation and ultimately reaches a closed-form solution via group error compensation. Experiments show that OBR enables aggressive W4A4KV4 quantization with 50% sparsity on existing LLMs, and delivers up to 4.72x speedup and 6.4x memory reduction compared to the FP16-dense baseline.