MixLLM: LLM Quantization with Global Mixed-precision between Output-features and Highly-efficient System Design
Zhen Zheng, Xiaonan Song, Chuanjie Liu
2024-12-23

Summary
This paper introduces MixLLM, a new method for making large language models (LLMs) smaller and faster through quantization, a technique that reduces the amount of memory they use without greatly affecting their accuracy.
What's the problem?
Current methods for quantizing LLMs often cause a noticeable drop in accuracy or use system resources inefficiently. In other words, the models do get smaller, but they either perform worse or take too long to process information.
What's the solution?
MixLLM improves on existing quantization techniques by using mixed-precision quantization between output features: it assigns different bit-widths to different parts of the model based on how much each output feature matters, judged globally across the whole model rather than within each layer. More important features get more bits to preserve accuracy, while less important ones get fewer bits to save memory. The method also includes a two-step process for converting data types quickly, which speeds up the model without losing quality.
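To make the bit-width assignment concrete, here is a minimal sketch of a global salience-based assignment between output features. This is an illustration, not the paper's actual algorithm: the salience scores, the 10% high-precision budget, and the function names are assumptions made for the example.

```python
import numpy as np

def assign_bitwidths(salience_per_layer, high_bits=8, low_bits=4, budget=0.10):
    """Illustrative global mixed-precision assignment (not MixLLM's exact algorithm).

    salience_per_layer: dict mapping layer name -> 1-D array of per-output-feature
    salience scores (how much each output feature affects model quality).
    The top `budget` fraction of features across ALL layers gets `high_bits`;
    everything else gets `low_bits`.
    """
    # Flatten all (layer, feature) pairs with their salience into one global list.
    entries = [(layer, idx, float(s))
               for layer, scores in salience_per_layer.items()
               for idx, s in enumerate(scores)]
    entries.sort(key=lambda e: e[2], reverse=True)  # global ranking, not per-layer

    n_high = int(len(entries) * budget)
    plan = {layer: np.full(len(scores), low_bits, dtype=np.int8)
            for layer, scores in salience_per_layer.items()}
    for layer, idx, _ in entries[:n_high]:
        plan[layer][idx] = high_bits  # the most salient output features get more bits
    return plan

# Toy usage: two layers with random "salience" scores.
rng = np.random.default_rng(0)
salience = {"layer0": rng.random(16), "layer1": rng.random(16)}
plan = assign_bitwidths(salience)
print({k: v.tolist() for k, v in plan.items()})
```

The key point the sketch captures is that the ranking is global: a layer whose features are all low-salience may end up entirely in the low bit-width, while another layer absorbs most of the high-precision budget.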
Why it matters?
This research is important because it allows LLMs to be more efficient and effective, making them easier to use in real-world applications like chatbots and virtual assistants. By optimizing how these models are compressed, MixLLM can help improve their performance while reducing the resources needed to run them.
Abstract
Quantization has become one of the most effective methodologies to compress LLMs into a smaller size. However, existing quantization solutions still show limitations of either non-negligible accuracy drop or system inefficiency. In this paper, we make a comprehensive analysis of the general quantization principles and their effect on the triangle of accuracy, memory consumption, and system efficiency. We propose MixLLM, which explores the new optimization space of mixed-precision quantization between output features based on the insight that different output features matter differently in the model. MixLLM identifies the output features with high salience in the global view rather than within each single layer, effectively assigning the larger bit-width to the output features that need it most to achieve good accuracy with low memory consumption. We present the sweet spot of quantization configuration from algorithm-system co-design that leads to both high accuracy and high system efficiency. To address the system challenge, we design a two-step dequantization to make use of the int8 Tensor Core easily, together with fast data type conversion to reduce dequantization overhead significantly, and present a software pipeline that overlaps memory access, dequantization, and the MatMul to the greatest extent. Extensive experiments show that with only 10% more bits, the PPL increase can be reduced from about 0.5 in SOTA to within 0.2 for Llama 3.1 70B, while MMLU-Pro improves by 0.93 on average over the SOTA across three popular models. In addition to its superior accuracy, MixLLM also achieves state-of-the-art system efficiency.
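As a rough illustration of the two-step dequantization idea, the sketch below keeps the weights as small integers through an integer matrix multiply with int32 accumulation (standing in for the int8 Tensor Core) and applies the floating-point scales only afterward to the accumulator, rather than dequantizing every weight element to float up front. The quantization helper, shapes, and the 10% high-precision split are assumptions for the example; this is not the paper's GPU kernel.

```python
import numpy as np

def quantize_sym(x, bits):
    """Symmetric quantization of a float array to `bits`-bit signed integers."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / qmax + 1e-12
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def mixed_int_matmul(a_fp32, w_fp32, w_bits_per_out):
    """Illustrative two-step dequantization around an integer matmul.

    Step 1: quantized weights (4-bit or 8-bit values held in int8 containers)
            feed an integer MatMul with int32 accumulation.
    Step 2: the float scales are applied once to the int32 accumulator,
            instead of converting every weight element to float beforehand.
    """
    a_q, a_scale = quantize_sym(a_fp32, 8)  # int8 activations
    out = np.empty((a_fp32.shape[0], w_fp32.shape[1]), dtype=np.float32)
    for j in range(w_fp32.shape[1]):  # per output feature, with its own bit-width
        w_q, w_scale = quantize_sym(w_fp32[:, j], int(w_bits_per_out[j]))
        acc = a_q.astype(np.int32) @ w_q.astype(np.int32)  # integer MatMul, int32 accumulate
        out[:, j] = acc.astype(np.float32) * (a_scale * w_scale)  # scales applied last
    return out

# Toy usage: roughly 10% of output features at 8 bits, the rest at 4 bits.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 64)).astype(np.float32)
W = rng.standard_normal((64, 32)).astype(np.float32)
bits = np.where(rng.random(32) < 0.10, 8, 4)
approx = mixed_int_matmul(A, W, bits)
print(np.abs(approx - A @ W).max())  # quantization error of the mixed-precision MatMul
```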