Lossless Acceleration of Large Language Models with Hierarchical Drafting based on Temporal Locality in Speculative Decoding
Sukmin Cho, Sangjin Choi, Taeho Hwang, Jeongyeon Seo, Soyeong Jeong, Huije Lee, Hoyun Song, Jong C. Park, Youngjin Kwon
2025-02-11

Summary
This paper introduces a new method called Hierarchy Drafting (HD) that makes large language models (LLMs) work faster without losing accuracy. It organizes previously seen words into levels and looks them up to predict what comes next, speeding up how quickly these AI models can generate text.
What's the problem?
Large language models are really useful for many tasks, but they can be slow when generating responses in real time. Current methods to speed them up either require a lot of fine-tuning or perform inconsistently across different types of tasks.
What's the solution?
The researchers created Hierarchy Drafting, which organizes words into different levels (databases) based on how recently and how frequently they have been used. When the model needs to predict the next words, it checks these levels in order, from the highest locality (most recently used) to the lowest. This lets the model make fast draft predictions that work well across different tasks without needing extra training.
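To make the idea concrete, here is a minimal sketch of a hierarchical draft lookup. The class name, the choice of three levels, and the n-gram matching scheme are illustrative assumptions for this summary, not the paper's exact implementation:

```python
class HierarchicalDraftBuffer:
    """Toy three-level draft store ordered by temporal locality
    (an illustrative assumption, not the paper's exact design):
      level 0 -- n-grams from the text generated so far (highest locality),
      level 1 -- n-grams seen earlier in the same session,
      level 2 -- n-grams from corpus-level statistics (lowest locality)."""

    def __init__(self, ngram_len=2, draft_len=4):
        self.ngram_len = ngram_len   # length of the lookup prefix
        self.draft_len = draft_len   # number of draft tokens proposed
        # One dictionary per locality level: prefix -> continuation.
        self.levels = [dict(), dict(), dict()]

    def update(self, level, tokens):
        """Index every (prefix -> continuation) pair in `tokens` at `level`."""
        n, d = self.ngram_len, self.draft_len
        for i in range(len(tokens) - n - d + 1):
            prefix = tuple(tokens[i:i + n])
            self.levels[level][prefix] = tokens[i + n:i + n + d]

    def draft(self, context):
        """Search levels from highest to lowest locality and return the
        first matching continuation; an empty list means no draft."""
        prefix = tuple(context[-self.ngram_len:])
        for level in self.levels:
            if prefix in level:
                return level[prefix]
        return []


# Usage: index tokens from the current context (level 0), then draft.
buf = HierarchicalDraftBuffer()
buf.update(0, [5, 9, 2, 7, 1, 4, 8, 3])
print(buf.draft([0, 5, 9]))  # -> [2, 7, 1, 4]
```

Because higher-locality levels are checked first, a phrase the model just produced is preferred over a merely common one, which is what gives consistent drafts across tasks.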
Why does it matter?
This matters because it could make AI language models respond much faster in real-world applications, like chatbots or virtual assistants, without sacrificing the quality of their responses. Faster AI models could lead to more responsive and efficient AI services in various fields, improving user experience and reducing computational costs.
Abstract
Accelerating inference in Large Language Models (LLMs) is critical for real-time interactions, as they have been widely incorporated into real-world services. Speculative decoding, a fully algorithmic solution, has gained attention for improving inference speed by drafting and verifying tokens, thereby generating multiple tokens in a single forward pass. However, current drafting strategies usually require significant fine-tuning or have inconsistent performance across tasks. To address these challenges, we propose Hierarchy Drafting (HD), a novel lossless drafting approach that organizes various token sources into multiple databases in a hierarchical framework based on temporal locality. In the drafting step, HD sequentially accesses multiple databases to obtain draft tokens from the highest to the lowest locality, ensuring consistent acceleration across diverse tasks and minimizing drafting latency. Our experiments on Spec-Bench using LLMs with 7B and 13B parameters demonstrate that HD outperforms existing database drafting methods, achieving robust inference speedups across model sizes, tasks, and temperatures.
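For context on the "drafting and verifying" loop the abstract refers to, below is a generic greedy-verification step, a sketch assumed for illustration rather than HD's exact procedure; the function name and inputs are hypothetical:

```python
def verify_greedy(target_predictions, draft_tokens):
    """Generic greedy verification in speculative decoding (a sketch, not
    the paper's exact procedure). One forward pass of the target model
    over context + draft yields its greedy prediction
    target_predictions[i] at each draft position. The longest matching
    prefix of the draft is accepted, plus the target model's own token at
    the first mismatch, so the output is identical to plain greedy
    decoding -- hence "lossless"."""
    accepted = []
    for predicted, drafted in zip(target_predictions, draft_tokens):
        if predicted != drafted:
            accepted.append(predicted)  # target's correction ends this step
            break
        accepted.append(drafted)        # draft token confirmed, keep going
    return accepted
```

Each verification pass thus emits at least one token and up to the full draft, which is where the speedup over one-token-at-a-time decoding comes from.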