MagicDec: Breaking the Latency-Throughput Tradeoff for Long Context Generation with Speculative Decoding
Jian Chen, Vashisth Tiwari, Ranajoy Sadhukhan, Zhuoming Chen, Jinyuan Shi, Ian En-Hsu Yen, Beidi Chen
2024-08-21

Summary
This paper introduces MagicDec, an approach that makes long-context text generation with large language models faster and more efficient by applying a technique called speculative decoding, even at large batch sizes.
What's the problem?
Serving large language models (LLMs) on tasks that involve long contexts is slow and resource-intensive. Conventional serving methods struggle to deliver both low latency (fast responses) and high throughput (many requests at once), which makes it hard to use these models effectively in real-time applications.
What's the solution?
MagicDec shows that speculative decoding can remain effective even when processing many requests at once, not just at small batch sizes. The authors analyze how the inference bottleneck shifts as batch size and sequence length grow: for moderate to long sequences, decoding time becomes dominated by reading the growing KV cache rather than the model weights. MagicDec uses this insight to deploy speculative decoding where it actually pays off, speeding up generation without changing the output distribution, and pairs the target model with draft models that use a sparse KV cache so that drafting stays cheap as both sequence length and batch size increase.
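
To make the draft-then-verify mechanism concrete, below is a minimal sketch of a standard speculative decoding step in Python. The draft_probs and target_probs functions, vocabulary size, and draft length are toy placeholders (assumptions for illustration, not the paper's models); the sketch omits MagicDec's batching and KV-cache optimizations but shows the accept/reject rule that keeps the output distribution identical to the target model's.

# Minimal toy sketch of one speculative decoding step (draft, then verify).
# draft_probs / target_probs are placeholder models over a tiny vocabulary.
import numpy as np

VOCAB = 8          # toy vocabulary size (assumption)
GAMMA = 4          # number of tokens the draft model proposes per step (assumption)
rng = np.random.default_rng(0)

def draft_probs(context):
    # Placeholder for a cheap draft model: returns a distribution over the vocab.
    logits = np.cos(np.arange(VOCAB) + len(context))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def target_probs(context):
    # Placeholder for the large target model (normally one batched forward
    # pass scores the whole speculated block at once).
    logits = np.sin(np.arange(VOCAB) * 0.7 + len(context))
    e = np.exp(logits - logits.max())
    return e / e.sum()

def speculative_step(context):
    # 1) Draft phase: sample GAMMA tokens autoregressively from the draft model.
    drafted, q_dists = [], []
    ctx = list(context)
    for _ in range(GAMMA):
        q = draft_probs(ctx)
        tok = rng.choice(VOCAB, p=q)
        drafted.append(tok)
        q_dists.append(q)
        ctx.append(tok)

    # 2) Verification phase: check each drafted token against the target model.
    accepted = []
    ctx = list(context)
    for tok, q in zip(drafted, q_dists):
        p = target_probs(ctx)
        # Accept with probability min(1, p/q); on rejection, resample from the
        # residual distribution max(0, p - q) and stop the step.
        if rng.random() < min(1.0, p[tok] / q[tok]):
            accepted.append(tok)
            ctx.append(tok)
        else:
            residual = np.maximum(p - q, 0.0)
            residual /= residual.sum()
            accepted.append(rng.choice(VOCAB, p=residual))
            return context + accepted

    # 3) All drafts accepted: sample one bonus token from the target model.
    accepted.append(rng.choice(VOCAB, p=target_probs(ctx)))
    return context + accepted

tokens = [0, 1, 2]          # toy prompt
for _ in range(5):
    tokens = speculative_step(tokens)
print(tokens)

Each step emits at least one token, and often several, while every accepted token is distributed exactly as the target model would have produced it; the speedup comes from verifying several cheap draft tokens with a single pass of the expensive target model.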
Why it matters?
This research is important because it allows for faster and more efficient text generation with LLMs, making them more practical for real-world applications like chatbots and document analysis. By improving how these models work, we can enhance user experiences in various fields that rely on quick and accurate text responses.
Abstract
Large Language Models (LLMs) have become more prevalent in long-context applications such as interactive chatbots, document analysis, and agent workflows, but it is challenging to serve long-context requests with low latency and high throughput. Speculative decoding (SD) is a widely used technique to reduce latency without sacrificing performance, but conventional wisdom suggests that its efficacy is limited to small batch sizes. In MagicDec, we show that, surprisingly, SD can achieve speedup even in the high-throughput inference regime for moderate to long sequences. More interestingly, our rigorous analysis shows that an intelligent drafting strategy can achieve better speedup with increasing batch size. MagicDec first identifies the bottleneck shifts with increasing batch size and sequence length, and uses these insights to deploy speculative decoding more effectively for high-throughput inference. It then leverages draft models with sparse KV caches to address the KV bottleneck, which scales with both sequence length and batch size.
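
To illustrate the "sparse KV cache for the draft model" idea mentioned in the abstract, below is a minimal sketch of a constant-budget KV cache of the kind a draft model could use. The class name, budget sizes, and head dimension are illustrative assumptions rather than the paper's implementation; the point is that the retained cache never grows, so the draft model's per-token memory traffic stays roughly flat with sequence length, while the target model still attends over the full context during verification.

# Minimal sketch of a constant-budget ("attention sink" plus recent window) KV cache.
# Sizes and names are illustrative assumptions, not MagicDec's actual configuration.
import numpy as np

class SlidingWindowKVCache:
    """Keeps the first n_sink entries plus the most recent `window` entries."""

    def __init__(self, n_sink=4, window=512, head_dim=64):
        self.n_sink = n_sink      # "sink" tokens kept permanently
        self.window = window      # most recent tokens kept
        self.keys = np.empty((0, head_dim))
        self.values = np.empty((0, head_dim))

    def append(self, k, v):
        # Add the newest key/value pair, then evict from the middle if over budget,
        # so cache size (and memory read per decode step) stays constant.
        self.keys = np.vstack([self.keys, k[None, :]])
        self.values = np.vstack([self.values, v[None, :]])
        budget = self.n_sink + self.window
        if len(self.keys) > budget:
            keep = np.r_[0:self.n_sink, len(self.keys) - self.window:len(self.keys)]
            self.keys = self.keys[keep]
            self.values = self.values[keep]

    def attend(self, q):
        # Standard scaled dot-product attention over the retained cache entries only.
        scores = self.keys @ q / np.sqrt(q.shape[-1])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ self.values

Because the target model's full KV cache is what dominates memory traffic at long sequence lengths and large batch sizes, a draft whose KV footprint is fixed like this keeps the relative cost of drafting small, which is why (per the paper's analysis) the speedup can actually improve as the batch grows.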