RSQ: Learning from Important Tokens Leads to Better Quantized LLMs

Yi-Lin Sung, Prateek Yadav, Jialu Li, Jaehong Yoon, Mohit Bansal

2025-03-04

Summary

This paper introduces RSQ (Rotate, Scale, then Quantize), a new method for compressing large AI language models so they run more efficiently without losing their ability to understand and generate text.

What's the problem?

Current methods for compressing AI models treat every token of the calibration text equally during quantization. This isn't ideal, because some tokens matter far more than others for preserving the model's abilities.

What's the solution?

The researchers created RSQ, which does three things in sequence: it rotates the model's internal representations to tame extreme outlier values, scales tokens so that the more important ones count more during compression, and then quantizes the model's weights. They found that the best way to measure a token's importance is by how much attention the model pays to it. They tested RSQ across different model families and sizes and found it worked consistently well.

Why it matters?

This matters because it helps make powerful AI language models smaller and faster without losing their abilities. Smaller models can run on more devices and use less energy, making AI technology more accessible and environmentally friendly. It also shows that understanding how AI pays attention to text can lead to better ways of improving these models.

Abstract

Layer-wise quantization is a key technique for efficiently compressing large models without expensive retraining. Previous methods typically quantize the weights of each layer by "uniformly" optimizing the layer reconstruction loss across all output tokens. However, in this paper, we demonstrate that better-quantized models can be obtained by prioritizing learning from important tokens (e.g., those with large attention scores). Building on this finding, we propose RSQ (Rotate, Scale, then Quantize), which (1) applies rotations (orthogonal transformations) to the model to mitigate outliers (values with exceptionally large magnitude), (2) scales each token's features based on its importance, and (3) quantizes the model using the GPTQ framework with second-order statistics computed from the scaled tokens. To compute token importance, we explore both heuristic and dynamic strategies. Based on a thorough analysis of all approaches, we adopt attention concentration, which uses the attention scores of each token as its importance, as the best approach. We demonstrate that RSQ consistently outperforms baseline methods across multiple downstream tasks and three model families: LLaMA3, Mistral, and Qwen2.5. Additionally, models quantized with RSQ achieve superior performance on long-context tasks, further highlighting its effectiveness. Lastly, RSQ demonstrates generalizability across various setups, including different model sizes, calibration datasets, bit precisions, and quantization methods.
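The three steps in the abstract can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the rotation here is a random orthogonal matrix (the paper's exact rotation construction may differ), "attention concentration" is approximated as the mean attention a token receives, and the output is the importance-weighted second-order statistic H = Σᵢ wᵢ xᵢ xᵢᵀ that a GPTQ-style solver would consume; all function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(d):
    # Orthogonal matrix via QR decomposition. RSQ applies rotations to
    # spread outlier magnitudes more evenly across dimensions.
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

def attention_concentration(attn):
    # attn: (heads, queries, keys) attention probabilities.
    # Importance of token j = mean attention it receives,
    # averaged over heads and query positions.
    return attn.mean(axis=(0, 1))  # shape: (seq,)

def scaled_second_order(X, w):
    # X: (seq, d) layer-input features; w: (seq,) token importances.
    # Scaling each token by sqrt(w_i) yields H = sum_i w_i x_i x_i^T,
    # the weighted statistic a GPTQ-style quantizer would use.
    Xs = X * np.sqrt(w)[:, None]
    return Xs.T @ Xs

# Toy usage with random features and attention maps.
d, seq, heads = 8, 16, 4
X = rng.normal(size=(seq, d))
R = random_rotation(d)                      # step 1: rotate
attn = rng.random(size=(heads, seq, seq))
attn /= attn.sum(axis=-1, keepdims=True)    # rows sum to 1, like softmax
w = attention_concentration(attn)           # step 2: importance weights
H = scaled_second_order(X @ R, w)           # step 3: stats for quantization
```

Because each attention row sums to 1, the importance weights `w` sum to 1 as well, so tokens effectively redistribute a fixed budget of influence over the reconstruction loss rather than inflating it.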