
Sigma: Differential Rescaling of Query, Key and Value for Efficient Language Models

Zhenghao Lin, Zihao Tang, Xiao Liu, Yeyun Gong, Yi Cheng, Qi Chen, Hang Li, Ying Xin, Ziyue Yang, Kailai Yang, Yu Yan, Xiao Liang, Shuai Lu, Yiming Huang, Zheheng Luo, Lei Qu, Xuan Feng, Yaoxiang Wang, Yuqing Xia, Feiyang Chen, Yuting Jiang, Yasen Hu

2025-01-24


Summary

This paper introduces Sigma, a new AI language model designed specifically for understanding and working with computer systems. It uses a technique called DiffQKV attention to process text faster and more efficiently than other AI models, especially when dealing with long pieces of text.

What's the problem?

Current AI language models are great for many tasks, but they're not specifically designed for understanding complex computer systems. They can also be slow when working with long texts, because the attention mechanism has to store and look back over more and more information as the context grows, and they might not be as accurate as we need them to be for technical tasks like managing and optimizing computer systems.

What's the solution?

The researchers created Sigma, which uses a new method called DiffQKV attention. This method treats the different parts of the attention mechanism (Query, Key, and Value) differently instead of handling them all the same way. Their experiments showed that the model tolerates compression of the Keys much better than compression of the Values, so they shrink the Key part aggressively, keep more of the Value part, and actually expand the Query part to give the model more representational power with almost no slowdown. A rough sketch of the compressed key/value idea is shown after this paragraph. They also trained Sigma on a huge amount of data specifically about computer systems, including data they collected and created themselves.
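To make the key/value compression idea concrete, here is a minimal sketch in PyTorch. It is not the authors' implementation: the module name, head counts, and dimensions are made-up example values, and the augmented-Q expansion and causal masking are omitted. The only point it illustrates is that the Key projection can use fewer heads than the Value projection, with both broadcast back up to the query heads (as in grouped-query attention), so the cached Keys take less memory.

```python
# Illustrative sketch of differentially compressed K/V heads (example values only).
import math
import torch
import torch.nn as nn


class DiffKVAttentionSketch(nn.Module):
    def __init__(self, d_model=1024, n_q_heads=16, n_k_heads=2, n_v_heads=4):
        super().__init__()
        assert n_q_heads % n_k_heads == 0 and n_q_heads % n_v_heads == 0
        self.d_head = d_model // n_q_heads
        self.n_q, self.n_k, self.n_v = n_q_heads, n_k_heads, n_v_heads
        self.q_proj = nn.Linear(d_model, n_q_heads * self.d_head)
        self.k_proj = nn.Linear(d_model, n_k_heads * self.d_head)  # K compressed more
        self.v_proj = nn.Linear(d_model, n_v_heads * self.d_head)  # V compressed less
        self.o_proj = nn.Linear(n_q_heads * self.d_head, d_model)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_q, self.d_head).transpose(1, 2)
        k = self.k_proj(x).view(b, t, self.n_k, self.d_head).transpose(1, 2)
        v = self.v_proj(x).view(b, t, self.n_v, self.d_head).transpose(1, 2)
        # Broadcast the compressed K and V heads to match the number of query heads.
        k = k.repeat_interleave(self.n_q // self.n_k, dim=1)
        v = v.repeat_interleave(self.n_q // self.n_v, dim=1)
        scores = (q @ k.transpose(-2, -1)) / math.sqrt(self.d_head)
        out = scores.softmax(dim=-1) @ v
        out = out.transpose(1, 2).reshape(b, t, self.n_q * self.d_head)
        return self.o_proj(out)


x = torch.randn(1, 8, 1024)
print(DiffKVAttentionSketch()(x).shape)  # torch.Size([1, 8, 1024])
```

During generation, only the small K projection and the mid-sized V projection would need to be cached per token, which is where the memory and speed savings in long contexts come from.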

Why it matters?

This matters because it could make AI much better at helping with complex computer tasks. Imagine having an AI assistant that can quickly diagnose computer problems, optimize system settings, or manage large networks more efficiently than ever before. Sigma could help make computers and networks run more smoothly and quickly, which is important as we rely more and more on technology in our daily lives. It's also a big step forward in building AI models that are specialized for specific technical fields.

Abstract

We introduce Sigma, an efficient large language model specialized for the system domain, empowered by a novel architecture including DiffQKV attention, and pre-trained on our meticulously collected system domain data. DiffQKV attention significantly enhances the inference efficiency of Sigma by optimizing the Query (Q), Key (K), and Value (V) components in the attention mechanism differentially, based on their varying impacts on the model performance and efficiency indicators. Specifically, we (1) conduct extensive experiments that demonstrate the model's varying sensitivity to the compression of K and V components, leading to the development of differentially compressed KV, and (2) propose augmented Q to expand the Q head dimension, which enhances the model's representation capacity with minimal impacts on the inference speed. Rigorous theoretical and empirical analyses reveal that DiffQKV attention significantly enhances efficiency, achieving up to a 33.36% improvement in inference speed over the conventional grouped-query attention (GQA) in long-context scenarios. We pre-train Sigma on 6T tokens from various sources, including 19.5B system domain data that we carefully collect and 1T tokens of synthesized and rewritten data. In general domains, Sigma achieves comparable performance to other state-of-the-art models. In the system domain, we introduce the first comprehensive benchmark AIMicius, where Sigma demonstrates remarkable performance across all tasks, significantly outperforming GPT-4 with an absolute improvement up to 52.5%.
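As a rough illustration of why differential KV compression helps in long-context scenarios, the snippet below estimates KV-cache size for a GQA-style layout versus one that keeps fewer Key heads than Value heads. All numbers (sequence length, layer and head counts, head dimension, precision) are invented for the example and are not Sigma's actual configuration.

```python
# Back-of-the-envelope KV-cache sizing with made-up example values.
def kv_cache_bytes(seq_len, n_layers, n_k_heads, n_v_heads, d_head, bytes_per_elem=2):
    k = seq_len * n_layers * n_k_heads * d_head * bytes_per_elem  # cached Keys
    v = seq_len * n_layers * n_v_heads * d_head * bytes_per_elem  # cached Values
    return k + v


gqa  = kv_cache_bytes(seq_len=32_768, n_layers=32, n_k_heads=8, n_v_heads=8, d_head=128)
diff = kv_cache_bytes(seq_len=32_768, n_layers=32, n_k_heads=2, n_v_heads=8, d_head=128)
print(f"GQA-style cache:     {gqa / 2**30:.2f} GiB")   # 4.00 GiB
print(f"Diff-KV-style cache: {diff / 2**30:.2f} GiB")  # 2.50 GiB (smaller K cache)
```

The cache shrinks in proportion to how much the Key heads are reduced, while the Value heads, to which the model is more sensitive, are left less compressed.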