SpeContext: Enabling Efficient Long-context Reasoning with Speculative Context Sparsity in LLMs
Jiaming Xu, Jiayi Pan, Hanzhen Wang, Yongkang Zhou, Jiancai Ye, Yu Wang, Guohao Dai
2025-12-02
Summary
This paper introduces a new way to speed up how large language models (LLMs) access information, focusing on making long-context reasoning faster and more efficient, especially when resources are limited.
What's the problem?
Large language models need to be able to quickly find and use relevant information from a large amount of text, which is called 'long-context reasoning'. Current methods for retrieving this information are slow and require a lot of computing power and memory, making it difficult to use these models on devices with limited resources like phones or in cloud environments where speed is critical.
What's the solution?
The researchers realized that the way information is condensed in smaller, 'distilled' language models is similar to what retrieval algorithms try to do: both aim to identify the same information the original LLM would focus on. They built a system called SpeContext that uses a smaller, distilled model *as* the retrieval algorithm. This involves three main parts: a lightweight retrieval head built from the distilled model's head-level attention weights (pruning away over 90% of redundant parameters), an asynchronous prefetching dataflow that loads KV cache data while the LLM is still computing, and an adaptive memory management system that maximizes use of the GPU's memory. They optimized this system for both cloud servers and smaller edge devices.
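The core idea, using a small model's attention to decide which parts of a long context the large model actually needs, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the attention-weight shape, and the chunking scheme are all illustrative assumptions.

```python
import numpy as np

def select_context_chunks(attn_weights, chunk_size, top_k):
    """Pick the context chunks the small (distilled) model attends to most.

    attn_weights: hypothetical (num_heads, seq_len) attention weights from
                  the distilled model's retrieval heads for the current query.
    Returns indices of the top_k chunks whose KV cache the large model keeps.
    """
    # Aggregate attention mass over heads, then sum within fixed-size chunks.
    per_token = attn_weights.sum(axis=0)                  # (seq_len,)
    num_chunks = len(per_token) // chunk_size
    per_chunk = per_token[: num_chunks * chunk_size].reshape(
        num_chunks, chunk_size
    ).sum(axis=1)
    # Keep the chunks with the highest aggregate attention, best first.
    return np.argsort(per_chunk)[-top_k:][::-1]

# Toy example: 4 heads over a 32-token context, chunks of 8 tokens.
rng = np.random.default_rng(0)
weights = rng.random((4, 32))
selected = select_context_chunks(weights, chunk_size=8, top_k=2)
print(selected)
```

The point of the design is that this scoring runs on the cheap distilled model, so the expensive LLM only ever attends over the selected chunks.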
Why it matters?
SpeContext significantly improves the speed of LLMs compared with the Huggingface framework – up to 24.89 times higher throughput in the cloud and 10.06 times faster on edge devices – with negligible loss in accuracy. This means powerful language models can be used more easily and efficiently in a wider range of applications, even on devices with limited resources, and it pushes the boundaries of what is possible in terms of both speed and accuracy.
Abstract
In this paper, we point out that the objective of retrieval algorithms is to align with the LLM, which is similar to the objective of knowledge distillation in LLMs. We analyze the similarity in information focus between the distilled language model (DLM) and the original LLM from the perspective of information theory, and thus propose a novel paradigm that leverages a DLM as the retrieval algorithm. Based on this insight, we present SpeContext, an algorithm and system co-design for long-context reasoning. (1) At the algorithm level, SpeContext proposes a lightweight retrieval head based on the head-level attention weights of the DLM, achieving >90% parameter reduction by pruning the redundancy. (2) At the system level, SpeContext designs an asynchronous prefetch dataflow via an elastic loading strategy, effectively overlapping KV cache retrieval with the LLM computation. (3) At the compilation level, SpeContext constructs a theoretical memory model and implements an adaptive memory management system to achieve acceleration by maximizing GPU memory utilization. We deploy and evaluate SpeContext in two resource-constrained environments, cloud and edge. Extensive experiments show that, compared with the Huggingface framework, SpeContext achieves up to 24.89x throughput improvement in the cloud and 10.06x speedup on the edge with negligible accuracy loss, pushing the Pareto frontier of accuracy and throughput.
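The system-level idea in (2), overlapping KV cache loading with LLM computation, can be sketched with a background prefetch thread. This is a toy illustration under stated assumptions: the sleeps stand in for host-to-GPU transfers and decode steps, and all names are invented for the example, not taken from SpeContext's code.

```python
import queue
import threading
import time

def prefetch_worker(chunk_ids, out_q):
    """Background thread: load retrieved KV cache chunks from slow storage
    (simulated with a sleep) while the main thread runs the model."""
    for cid in chunk_ids:
        time.sleep(0.01)              # stand-in for a host-to-GPU copy
        out_q.put((cid, f"kv_chunk_{cid}"))

def decode_with_prefetch(chunk_ids):
    """Consume prefetched chunks while computing, so loading and compute
    overlap instead of running back-to-back."""
    out_q = queue.Queue()
    loader = threading.Thread(target=prefetch_worker, args=(chunk_ids, out_q))
    loader.start()
    processed = []
    for _ in chunk_ids:
        cid, _kv = out_q.get()        # blocks only if loading lags compute
        time.sleep(0.01)              # stand-in for one decode step
        processed.append(cid)
    loader.join()
    return processed

print(decode_with_prefetch([3, 7, 1]))  # chunks arrive in request order
```

In the ideal case the load of chunk *i+1* hides entirely behind the compute on chunk *i*, which is the overlap the elastic loading strategy is designed to sustain.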