
ShadowKV: KV Cache in Shadows for High-Throughput Long-Context LLM Inference

Hanshi Sun, Li-Wen Chang, Wenlei Bao, Size Zheng, Ningxin Zheng, Xin Liu, Harry Dong, Yuejie Chi, Beidi Chen

2024-10-30


Summary

This paper introduces ShadowKV, a new system designed to improve the performance of large language models (LLMs) when processing long contexts by efficiently managing memory and reducing delays during inference.

What's the problem?

As LLMs handle longer inputs, the key-value (KV) cache they must store grows with the sequence length, and reading that cache for every generated token slows inference down. Existing approaches either keep the full cache on the GPU, which consumes too much memory, or offload it to CPU memory, which adds significant decoding latency because the data has to travel back over the slower CPU-GPU link. Either way, it is hard to maintain high throughput when serving long-context requests.
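To get a feel for why the cache becomes the bottleneck, here is a rough back-of-the-envelope sketch (not from the paper) that sizes the full FP16 KV cache using Llama-3.1-8B-like dimensions; the layer, head, and precision values are illustrative assumptions.

```python
# Back-of-the-envelope KV-cache sizing (illustrative dimensions, not the paper's numbers).
def kv_cache_bytes(seq_len, batch_size, num_layers=32, num_kv_heads=8,
                   head_dim=128, bytes_per_elem=2):
    """Full KV cache size: two tensors (keys and values) per layer."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * batch_size * bytes_per_elem

# At 128K tokens, a single sequence already needs roughly 16-17 GB in FP16,
# so even a small batch overflows an 80 GB A100 before weights are counted.
for batch in (1, 4, 8):
    gb = kv_cache_bytes(seq_len=128_000, batch_size=batch) / 1e9
    print(f"batch={batch}: ~{gb:.0f} GB of KV cache")
```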

What's the solution?

The authors present ShadowKV, which changes how the KV cache is stored and accessed. Instead of keeping the full cache in GPU memory, it stores a compact low-rank version of the key cache on the GPU and offloads the value cache to CPU memory, freeing room for larger batches and longer sequences. During decoding, an accurate selection strategy identifies the small set of KV pairs each new token actually needs and reconstructs them on the fly, so latency stays low without sacrificing accuracy. In their tests, ShadowKV supports much larger batch sizes and boosts throughput by up to 3.04 times on an A100 GPU compared to previous methods.
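The idea is easier to see in a toy sketch. The snippet below is illustrative only, not the authors' implementation: it compresses a single head's key cache with a truncated SVD (standing in for the low-rank key cache kept on the GPU), keeps a separate copy of the value cache (standing in for CPU offloading), and at decode time scores per-chunk key "landmarks" to fetch back only a few KV chunks. Parameters such as `rank`, `chunk_size`, and `top_chunks`, and the mean-pooled landmarks, are simplified stand-ins for the paper's selection strategy.

```python
import numpy as np

rng = np.random.default_rng(0)
seq_len, d, rank, chunk_size, top_chunks = 4096, 128, 16, 64, 4

# Prefill: full key/value caches for one attention head (toy sizes).
K = rng.standard_normal((seq_len, d)).astype(np.float32)
V = rng.standard_normal((seq_len, d)).astype(np.float32)

# 1) Keep only a low-rank factorization of the key cache "on the GPU".
U, S, Vt = np.linalg.svd(K, full_matrices=False)
K_factors = (U[:, :rank] * S[:rank], Vt[:rank])     # small factors instead of full K

# 2) "Offload" the full value cache (a separate copy stands in for CPU memory).
V_offloaded = V.copy()

# Keep one landmark key per chunk for cheap selection at decode time.
landmarks = K.reshape(-1, chunk_size, d).mean(axis=1)   # (num_chunks, d)

# Decode step: pick the chunks whose landmarks best match the new query.
q = rng.standard_normal(d).astype(np.float32)
selected = np.argsort(landmarks @ q)[-top_chunks:]

# Reconstruct only the selected keys from the low-rank factors; fetch their values.
idx = np.concatenate([np.arange(c * chunk_size, (c + 1) * chunk_size) for c in selected])
K_sel = K_factors[0][idx] @ K_factors[1]            # approximate keys for selected tokens
V_sel = V_offloaded[idx]                            # values pulled back from "CPU"

# Sparse attention over the selected KV pairs only.
scores = K_sel @ q
w = np.exp(scores - scores.max())
w /= w.sum()
out = w @ V_sel
print(out.shape)  # (128,)
```

The point of the sketch is the memory split: the large, exact value cache never has to live on the GPU, and the keys needed for the few selected tokens are rebuilt on demand from a small low-rank factorization rather than stored in full.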

Why it matters?

This research is important because it allows LLMs to work more efficiently with long texts, making them faster and more effective for various applications like chatbots, content generation, and more complex tasks. By enhancing how these models manage memory and processing, ShadowKV can help improve user experiences in AI applications.

Abstract

With the widespread deployment of long-context large language models (LLMs), there has been a growing demand for efficient support of high-throughput inference. However, as the key-value (KV) cache expands with the sequence length, the increasing memory footprint and the need to access it for each token generation both result in low throughput when serving long-context LLMs. While various dynamic sparse attention methods have been proposed to speed up inference while maintaining generation quality, they either fail to sufficiently reduce GPU memory consumption or introduce significant decoding latency by offloading the KV cache to the CPU. We present ShadowKV, a high-throughput long-context LLM inference system that stores the low-rank key cache and offloads the value cache to reduce the memory footprint for larger batch sizes and longer sequences. To minimize decoding latency, ShadowKV employs an accurate KV selection strategy that reconstructs minimal sparse KV pairs on-the-fly. By evaluating ShadowKV on a broad range of benchmarks, including RULER, LongBench, and Needle In A Haystack, and models like Llama-3.1-8B, Llama-3-8B-1M, GLM-4-9B-1M, Yi-9B-200K, Phi-3-Mini-128K, and Qwen2-7B-128K, we demonstrate that it can support up to 6× larger batch sizes and boost throughput by up to 3.04× on an A100 GPU without sacrificing accuracy, even surpassing the performance achievable with infinite batch size under the assumption of infinite GPU memory. The code is available at https://github.com/bytedance/ShadowKV.