HeadInfer: Memory-Efficient LLM Inference by Head-wise Offloading
Cheng Luo, Zefan Cai, Hanshi Sun, Jinqi Xiao, Bo Yuan, Wen Xiao, Junjie Hu, Jiawei Zhao, Beidi Chen, Anima Anandkumar
2025-02-19
Summary
This paper introduces HeadInfer, a new method for running large language models (LLMs) with far less GPU memory. It's like finding a way to run a powerful computer program on a regular laptop instead of needing a supercomputer.
What's the problem?
As AI language models get better at handling long texts, they need more and more memory to run. Most of that memory goes to the key-value (KV) cache, which stores information about everything the model has already read so it doesn't have to reprocess it. The problem is that most GPUs, even good gaming ones, don't have nearly enough memory to run these models on very long texts.
What's the solution?
The researchers created HeadInfer, which cleverly manages the AI's memory. Instead of keeping the entire KV cache in the GPU's fast but limited memory, it moves most of it to the slower but much larger CPU RAM. The clever part is the granularity: HeadInfer works at the level of individual attention heads, keeping only the head currently being computed on the GPU and streaming the rest in and out as needed. This lets the AI work with much longer texts than before, without needing super expensive hardware.
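The core idea can be sketched in a few lines: keep the full KV cache in host memory and compute attention one head at a time, so only a single head's K/V slice is ever "on device". This is a minimal numpy sketch, not the paper's implementation; class and function names (`HeadwiseKVCache`, `fetch_head`, `headwise_attention`) are illustrative, and the host/device split is simulated with plain arrays (a real system would use pinned CPU buffers and asynchronous GPU copies).

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

class HeadwiseKVCache:
    """Toy model of head-wise offloading: the full cache lives in (slow)
    host memory; only one head's K/V is staged for compute at a time."""
    def __init__(self, n_heads, max_seq_len, head_dim):
        # "CPU RAM": full cache, shape [heads, seq, dim]
        self.k_host = np.zeros((n_heads, max_seq_len, head_dim), dtype=np.float32)
        self.v_host = np.zeros_like(self.k_host)

    def append(self, pos, k, v):
        # k, v: [heads, dim] for the newly generated token
        self.k_host[:, pos] = k
        self.v_host[:, pos] = v

    def fetch_head(self, h, upto):
        # stand-in for an async host-to-device copy of one head's cache
        return self.k_host[h, :upto], self.v_host[h, :upto]

def headwise_attention(q, cache, upto):
    """q: [heads, dim]. Attention is computed head by head, so at most
    one head's KV slice needs to be resident on the device."""
    n_heads, head_dim = q.shape
    out = np.empty_like(q)
    for h in range(n_heads):
        k, v = cache.fetch_head(h, upto)          # [upto, dim]
        scores = (k @ q[h]) / np.sqrt(head_dim)   # [upto]
        out[h] = softmax(scores) @ v              # [dim]
    return out
```

The per-head loop produces exactly the same result as standard multi-head attention; the only change is that the KV working set shrinks from all heads to one head, which is what lets the rest of the cache live in CPU RAM.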
Why does it matter?
This matters because it makes advanced AI more accessible. With HeadInfer, an AI model that normally needs server-grade hardware can run on a regular high-end gaming computer. It can handle contexts that are millions of tokens long, far more than before. This could make powerful AI tools available to more people and businesses, potentially leading to new applications and discoveries in fields that work with large amounts of text data.
Abstract
Transformer-based large language models (LLMs) demonstrate impressive performance in long context generation. Extending the context length has disproportionately shifted the memory footprint of LLMs during inference to the key-value cache (KV cache). In this paper, we propose HEADINFER, which offloads the KV cache to CPU RAM while avoiding the need to fully store the KV cache for any transformer layer on the GPU. HEADINFER employs a fine-grained, head-wise offloading strategy, maintaining only selected attention heads' KV cache on the GPU while computing attention output dynamically. Through roofline analysis, we demonstrate that HEADINFER maintains computational efficiency while significantly reducing memory footprint. We evaluate HEADINFER on the Llama-3-8B model with a 1-million-token sequence, reducing the GPU memory footprint of the KV cache from 128 GB to 1 GB and the total GPU memory usage from 207 GB to 17 GB, achieving a 92% reduction compared to BF16 baseline inference. Notably, HEADINFER enables 4-million-token inference with an 8B model on a single consumer GPU with 24 GB memory (e.g., NVIDIA RTX 4090) without approximation methods.
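The 128 GB figure in the abstract is easy to sanity-check. A rough sizing sketch, assuming Llama-3-8B's public configuration (32 layers, 8 KV heads under grouped-query attention, head dimension 128) and taking "1 million tokens" as 2**20:

```python
# Back-of-envelope KV-cache sizing for Llama-3-8B at ~1M tokens.
# Model constants assumed from the public Llama-3-8B config.
layers, kv_heads, head_dim = 32, 8, 128
seq_len = 2**20          # "1 million" tokens
bytes_bf16 = 2           # bfloat16

# Factor of 2 accounts for storing both K and V.
kv_bytes = 2 * layers * kv_heads * head_dim * seq_len * bytes_bf16
print(kv_bytes / 2**30)                          # 128.0 (GiB)

# Granularity of head-wise offloading: one head's cache for one layer.
per_head_layer = kv_bytes / (layers * kv_heads)
print(per_head_layer / 2**30)                    # 0.5 (GiB)
```

The per-head-per-layer slice of about 0.5 GiB is consistent with the abstract's claim that the GPU-resident KV footprint drops to roughly 1 GB when only a couple of head slices are kept on device at a time.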