InfiniteHiP: Extending Language Model Context Up to 3 Million Tokens on a Single GPU
Heejun Lee, Geon Park, Jaduk Suh, Sung Ju Hwang
2025-02-14

Summary
This paper introduces InfiniteHiP, a new system that lets large AI language models handle extremely long texts, up to 3 million tokens, on a single GPU while running faster and using less memory.
What's the problem?
Large language models struggle to process very long texts: doing so slows them down and consumes a lot of memory. Most models are also limited to the text lengths they were originally trained on, so they cannot handle much longer inputs effectively.
What's the solution?
The researchers created InfiniteHiP, which uses a method called hierarchical token pruning to discard unimportant parts of the context while keeping the relevant ones. They also adjust how the model encodes word positions (RoPE) so it can generalize to texts far longer than those seen during training, and they offload the key-value cache to the computer's main memory to relieve GPU memory pressure. Together, these changes let the model process much longer texts without slowing down or losing accuracy.
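To make the pruning idea concrete, here is a minimal, illustrative Python sketch. It is not the authors' algorithm or code: the function name, block sizes, keep ratio, and the max-dot-product block score are assumptions chosen for readability. It only shows the general pattern of scoring coarse blocks of cached keys against the current query and keeping the best ones at progressively finer granularity.

    # Illustrative sketch of hierarchical token pruning (not the paper's exact algorithm).
    # Idea: score coarse blocks of cached keys against the current query, keep only the
    # top-scoring blocks, then repeat at a finer granularity before running attention.
    import numpy as np

    def prune_hierarchically(query, keys, block_sizes=(64, 16), keep_ratio=0.25):
        """Return indices of key positions kept after multi-stage block pruning.

        query: (d,) current query vector
        keys:  (n, d) cached key vectors
        block_sizes: coarse-to-fine block widths used at each pruning stage
        keep_ratio: fraction of blocks retained per stage
        """
        kept = np.arange(len(keys))  # start with every cached position
        for block in block_sizes:
            n_blocks = max(1, len(kept) // block)
            blocks = np.array_split(kept, n_blocks)
            # Score each block by its best-matching key (a cheap proxy for attention mass).
            scores = [np.max(keys[b] @ query) for b in blocks]
            n_keep = max(1, int(len(blocks) * keep_ratio))
            top = np.argsort(scores)[-n_keep:]
            kept = np.concatenate([blocks[i] for i in sorted(top)])
        return kept

After pruning, attention only needs to be computed over the surviving positions, which is what makes very long contexts affordable in the first place.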
Why it matters?
This matters because it allows AI models to handle large-scale tasks like analyzing long documents, books, or even entire conversations more efficiently. By making these models faster and more memory-efficient, InfiniteHiP could lead to better AI tools for research, education, and industries that rely on processing large amounts of information.
Abstract
In modern large language models (LLMs), handling very long context lengths presents significant challenges as it causes slower inference speeds and increased memory costs. Additionally, most existing pre-trained LLMs fail to generalize beyond their original training sequence lengths. To enable efficient and practical long-context utilization, we introduce InfiniteHiP, a novel and practical LLM inference framework that accelerates processing by dynamically eliminating irrelevant context tokens through a modular hierarchical token pruning algorithm. Our method also allows generalization to longer sequences by selectively applying various RoPE adjustment methods according to the internal attention patterns within LLMs. Furthermore, we offload the key-value cache to host memory during inference, significantly reducing GPU memory pressure. As a result, InfiniteHiP enables the processing of up to 3 million tokens on a single L40s 48GB GPU -- 3x larger -- without any permanent loss of context information. Our framework achieves an 18.95x speedup in attention decoding for a 1 million token context without requiring additional training. We implement our method in the SGLang framework and demonstrate its effectiveness and practicality through extensive evaluations.
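As a rough illustration of the key-value cache offloading mentioned in the abstract, the sketch below keeps the full key/value cache in pinned host (CPU) memory and copies only the positions that survive pruning to the GPU for each attention step. It assumes PyTorch; the class name, shapes, and methods are hypothetical and do not reflect the paper's actual SGLang integration.

    # Minimal sketch of KV-cache offloading (an illustration, not the paper's implementation).
    import torch

    class OffloadedKVCache:
        def __init__(self, n_tokens, n_heads, head_dim):
            # Pinned host memory allows fast, asynchronous copies to the GPU.
            shape = (n_tokens, n_heads, head_dim)
            self.k = torch.empty(shape, dtype=torch.float16).pin_memory()
            self.v = torch.empty(shape, dtype=torch.float16).pin_memory()

        def gather_to_gpu(self, positions, device="cuda"):
            """Copy only the selected positions (e.g., pruning survivors) to the GPU."""
            idx = torch.as_tensor(positions, dtype=torch.long)
            k = self.k.index_select(0, idx).to(device, non_blocking=True)
            v = self.v.index_select(0, idx).to(device, non_blocking=True)
            return k, v

Because only a small, dynamically selected slice of the cache ever resides on the GPU, the context length is bounded by host memory rather than GPU memory, which is what allows multi-million-token contexts on a single 48GB card.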