SQ-format: A Unified Sparse-Quantized Hardware-friendly Data Format for LLMs
Ruixuan Huang, Hao Zeng, Hantao Huang, Jinyuan Shi, Minghui Yu, Ian En-Hsu Yen, Shuai Wang
2025-12-08
Summary
This paper introduces the Sparse-Quantized Format (SQ-format), a new way to store and process data in large language models (LLMs), aiming to make these models faster and more efficient without losing accuracy.
What's the problem?
Currently, making LLMs smaller and faster through techniques like reducing numerical precision (quantization) and removing unnecessary connections (sparsification) is tricky. Lower-precision formats don't always run proportionally faster on existing hardware (for example, 4-bit weights may still compute at 8-bit speed), and the sparse storage formats GPUs do support, such as 2:4 semi-structured sparsity, are rarely used because they hurt accuracy. It's hard to balance speed, efficiency, and model accuracy all at once.
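As context for the GPU-supported sparse format mentioned above, here is a minimal sketch of the 2:4 semi-structured pattern: in every group of four consecutive values, only the two largest-magnitude entries are kept. The function name and grouping axis are illustrative assumptions, not details from the paper.

```python
import numpy as np

def prune_2_4(w):
    """Apply the 2:4 semi-structured pattern: in every contiguous group of
    four values along the last axis, keep the two largest-magnitude entries
    and zero the other two."""
    w = np.asarray(w, dtype=np.float32)
    assert w.shape[-1] % 4 == 0, "last axis must be a multiple of 4"
    groups = w.reshape(-1, 4)
    # Indices of the two smallest-magnitude entries in each group of four.
    drop = np.argsort(np.abs(groups), axis=1)[:, :2]
    pruned = groups.copy()
    np.put_along_axis(pruned, drop, 0.0, axis=1)
    return pruned.reshape(w.shape)

w = np.array([[1.0, -2.0, 3.0, 4.0, 8.0, -7.0, 0.5, 6.0]])
pw = prune_2_4(w)  # each group of four keeps exactly two nonzeros
```

Because exactly half the entries in every group are zeroed regardless of where the large values fall, this pattern is easy for hardware to exploit, but it can also force the pruning of values the model needs, which is the accuracy loss the paper refers to.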
What's the solution?
The researchers propose SQ-format, a unified data format that combines quantization and sparsification and is designed to be easily supported by both current GPUs and future hardware. It leverages the fact that sparse matrices can be accelerated even at high precision, while low-precision matrix multiplication can be accelerated as well; combining the two strategically yields better overall performance. The format works especially well for activations in which a few values (outliers) are much larger than the rest, making effective static compression possible.
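As a hedged illustration of this general idea (the summary does not give SQ-format's exact layout, so every function name, threshold, and bit-width below is an assumption), one can split a matrix into a sparse high-precision outlier part and a dense low-bit part, letting each part use the matching hardware fast path:

```python
import numpy as np

def sq_decompose(x, outlier_frac=0.01, bits=4):
    """Illustrative split: keep the largest-magnitude entries as a sparse
    full-precision matrix, and quantize the remaining dense part to a
    low-bit integer grid with one symmetric scale."""
    x = np.asarray(x, dtype=np.float32)
    # Treat the top `outlier_frac` of entries by magnitude as outliers.
    k = max(1, int(outlier_frac * x.size))
    thresh = np.partition(np.abs(x).ravel(), -k)[-k]
    outlier_mask = np.abs(x) >= thresh
    sparse_outliers = np.where(outlier_mask, x, 0.0)

    # Quantize the outlier-free dense remainder symmetrically.
    dense = np.where(outlier_mask, 0.0, x)
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(dense).max() / qmax if dense.any() else 1.0
    q = np.clip(np.round(dense / scale), -qmax - 1, qmax).astype(np.int8)
    return sparse_outliers, q, scale

def sq_reconstruct(sparse_outliers, q, scale):
    return sparse_outliers + q.astype(np.float32) * scale

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 8)).astype(np.float32)
x[0, 0] = 50.0  # a single large activation outlier
s, q, scale = sq_decompose(x)
err = np.abs(sq_reconstruct(s, q, scale) - x).max()
```

Removing the outlier before quantizing is what keeps the scale small: with the 50.0 left in, the 4-bit grid would be far too coarse for the remaining values. This mirrors the paper's observation that unevenly distributed outliers are exactly the case the format handles well.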
Why it matters?
This work is important because it could help make powerful LLMs more accessible and usable on a wider range of devices. By creating a more efficient data format, it paves the way for new hardware designs and improvements to existing AI accelerators, ultimately leading to faster and more affordable AI applications.
Abstract
Post-training quantization (PTQ) plays a crucial role in the democratization of large language models (LLMs). However, existing low-bit quantization and sparsification techniques struggle to balance accuracy and efficiency due to limited hardware support. For example, W4A8 can only achieve the same peak TOPS as W8A8, whereas the GPU-supported sparse data format (2:4 semi-structured sparsity) is seldom adopted due to its loss of accuracy. To bridge this gap, in this paper, we propose the Sparse-Quantized Format (SQ-format), a unified data format for quantization and sparsification that can be easily supported by new hardware and, potentially, existing GPUs. SQ-format makes use of the fact that sparse matrices can be accelerated in high precision, and low-precision matrix multiplication can be accelerated accordingly. As such, SQ-format is proposed to achieve a Pareto improvement between performance and throughput. This format is particularly suitable for activations with unevenly distributed outliers and makes their static compression possible. We show state-of-the-art PTQ performance with SQ-format, propose the hardware required to support it, and further offer design exploration and insights for next-generation AI accelerators.
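The abstract's W4A8-versus-W8A8 point can be sketched as follows: on current GPUs, 4-bit weights are typically unpacked to 8-bit before the INT8 multiply, so the arithmetic runs at the same rate as W8A8 and only weight memory traffic shrinks. The packing convention and helper below are illustrative assumptions, not the paper's kernel.

```python
import numpy as np

def unpack_int4(packed):
    """Unpack two signed 4-bit weights per byte (low nibble first) into
    int8 values, as a W4A8 kernel would before the INT8 multiply."""
    lo = (packed & 0x0F).astype(np.int8)
    hi = ((packed >> 4) & 0x0F).astype(np.int8)
    # Sign-extend the 4-bit two's-complement nibbles.
    lo = np.where(lo >= 8, lo - 16, lo).astype(np.int8)
    hi = np.where(hi >= 8, hi - 16, hi).astype(np.int8)
    return np.stack([lo, hi], axis=-1).reshape(packed.shape[0], -1)

# Pack the weights -3 and 5 into one byte: low nibble 0xD, high nibble 0x5.
packed = np.array([[0x5D]], dtype=np.uint8)
w8 = unpack_int4(packed)                # int8 weights, just as in W8A8
a8 = np.array([[2, 3]], dtype=np.int8)  # int8 activations
# The multiply uses 8-bit operands either way, hence identical peak TOPS;
# only the stored weight footprint is halved.
acc = a8.astype(np.int32) @ w8.astype(np.int32).T
```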