PrefixQuant: Static Quantization Beats Dynamic through Prefixed Outliers in LLMs

Mengzhao Chen, Yi Liu, Jiahao Wang, Yi Bin, Wenqi Shao, Ping Luo

2024-10-13

Summary

This paper introduces PrefixQuant, a new method for quantizing large language models (LLMs) that improves their efficiency and speed by better handling outlier tokens in the model's activations.

What's the problem?

When deploying large language models, it's important to make them faster and use less memory. Existing methods for quantizing these models handle outliers that show up in particular channels but overlook outliers tied to specific tokens. To cope with those token-wise outliers, they fall back on per-token dynamic quantization, which recomputes quantization parameters at runtime and is costly, slowing the model down.

What's the solution?

To address this issue, the authors developed PrefixQuant, which identifies and isolates outlier tokens offline, without retraining the model. PrefixQuant finds high-frequency outlier tokens and prefixes them in the KV cache, which prevents these problematic tokens from appearing in activations during inference. This makes efficient per-tensor static quantization viable, and it outperforms the more expensive per-token dynamic quantization approaches. In experiments, models quantized with PrefixQuant were both more accurate and significantly faster than those using older methods.
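The paper's actual implementation is in the linked repository; the following is only a minimal sketch of the idea. It assumes a Hugging Face-style causal language model and standard PyTorch forward hooks, and the function names, the 20x-median outlier threshold, and the top_k=4 choice are all illustrative, not taken from the paper. It collects per-token activation norms on a small calibration set, picks the token ids that most often produce extreme activations, and prefills a reusable KV cache with them.

```python
# Hypothetical sketch of outlier-token prefixing (not the authors' code).
import torch
from collections import Counter

def find_outlier_tokens(model, calib_batches, layer, threshold=20.0, top_k=4):
    """Count token ids whose hidden-state norm exceeds `threshold` x the median norm."""
    counts = Counter()
    acts = {}

    def hook(_module, _inputs, output):
        # Decoder layers return a tuple whose first element is the hidden states.
        acts["h"] = output[0] if isinstance(output, tuple) else output

    handle = layer.register_forward_hook(hook)
    with torch.no_grad():
        for input_ids in calib_batches:                  # each: (1, seq_len) LongTensor
            model(input_ids)
            norms = acts["h"].norm(dim=-1).squeeze(0)    # (seq_len,)
            median = norms.median()
            for pos in (norms > threshold * median).nonzero().flatten().tolist():
                counts[int(input_ids[0, pos])] += 1
    handle.remove()
    return [tok for tok, _ in counts.most_common(top_k)]

def build_prefixed_cache(model, outlier_token_ids, bos_id):
    """Run the outlier tokens once and keep their KV cache as a fixed prefix."""
    prefix = torch.tensor([[bos_id] + outlier_token_ids])
    with torch.no_grad():
        out = model(prefix, use_cache=True)
    return out.past_key_values      # reused as the prefix for every later request
```

Because the prefix is computed once offline, the extra cost at inference time is only the few cached prefix positions, while the remaining activations stay in a narrow range that a single static scale can cover.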

Why it matters?

This research is important because it provides a new way to optimize large language models, making them more accessible for real-world applications where speed and memory usage are critical. By improving how these models handle data, PrefixQuant could lead to advancements in various fields, including natural language processing and AI-driven applications.

Abstract

Quantization is essential for deploying Large Language Models (LLMs) by enhancing memory efficiency and inference speed. Existing methods for activation quantization mainly address channel-wise outliers, often neglecting token-wise outliers, leading to reliance on costly per-token dynamic quantization. To address this, we introduce PrefixQuant, a novel technique that isolates outlier tokens offline without re-training. Specifically, PrefixQuant identifies high-frequency outlier tokens and prefixes them in the KV cache, preventing the generation of outlier tokens during inference and simplifying quantization. To our knowledge, PrefixQuant is the first to enable efficient per-tensor static quantization to outperform expensive per-token dynamic quantization. For instance, in W4A4KV4 (4-bit weight, 4-bit activation, and 4-bit KV cache) Llama-3-8B, PrefixQuant with per-tensor static quantization achieves a 7.43 WikiText2 perplexity and 71.08% average accuracy on 5 common-sense reasoning tasks, outperforming previous per-token dynamic quantization methods like QuaRot with 0.98 perplexity improvement and +5.98 points accuracy. Additionally, the inference speed of W4A4 quantized models using PrefixQuant is 1.60x to 2.81x faster than FP16 models and exceeds QuaRot models by 1.2x to 1.3x. Our code is available at https://github.com/ChenMnZ/PrefixQuant.
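For readers less familiar with the terminology, the toy PyTorch sketch below (not from the paper; the function names and 4-bit toy setup are illustrative) shows why a single outlier token inflates the shared scale of per-tensor static quantization, while per-token dynamic quantization recomputes a scale for every token at runtime. PrefixQuant's contribution is to remove such outlier tokens in advance, so the cheaper static scheme becomes competitive.

```python
# Minimal sketch contrasting the two activation-quantization schemes the abstract compares.
import torch

def quantize_per_tensor_static(x, scale, bits=4):
    """Static: one scale, calibrated offline, shared by the whole tensor."""
    qmax = 2 ** (bits - 1) - 1
    return (x / scale).round().clamp(-qmax - 1, qmax) * scale

def quantize_per_token_dynamic(x, bits=4):
    """Dynamic: a scale computed at runtime for every token (each row of x)."""
    qmax = 2 ** (bits - 1) - 1
    scale = x.abs().amax(dim=-1, keepdim=True) / qmax
    return (x / scale).round().clamp(-qmax - 1, qmax) * scale

# Toy activations: one outlier token dominates the whole-tensor range.
x = torch.randn(8, 16)
x[0] *= 50                                     # simulated outlier token
static_scale = x.abs().max() / 7               # offline calibration picks up the outlier
print("static error :", (quantize_per_tensor_static(x, static_scale) - x).abs().mean())
print("dynamic error:", (quantize_per_token_dynamic(x) - x).abs().mean())
```

Running the sketch, the static per-tensor error is much larger because the outlier row stretches the shared scale; once such tokens are excluded, a single static scale fits the remaining activations well, which is the regime PrefixQuant creates.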