LongCodeZip: Compress Long Context for Code Language Models
Yuling Shi, Yichun Qian, Hongyu Zhang, Beijun Shen, Xiaodong Gu
2025-10-05
Summary
This paper introduces LongCodeZip, a method that compresses long code contexts so large language models (LLMs) can work with large codebases more efficiently, without losing accuracy.
What's the problem?
LLMs are getting better at understanding and generating code, but they struggle with huge codebases. Processing all that code is computationally expensive and slow. Existing methods for shortening the code given to the LLM don't understand how code is structured, so they can accidentally remove important parts and make the LLM perform worse.
What's the solution?
LongCodeZip tackles this by compressing code in two stages. First, it looks at the whole codebase and identifies which functions are most relevant to the task at hand, keeping those and discarding the rest. Then, within the retained functions, it breaks the code into smaller blocks and selects the most relevant subset that fits within a token budget. Relevance is measured by how 'surprising' each part of the code is given the instruction, a quantity known as conditional perplexity.
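The first, coarse-grained stage can be sketched as follows. This is an illustrative outline, not the authors' implementation: `score_chunk` is a hypothetical stand-in for the conditional-perplexity computation (the paper scores function-level chunks with a code LLM), replaced here by a trivial word-overlap proxy so the example runs standalone.

```python
# Illustrative sketch of LongCodeZip's coarse-grained stage.
# NOTE: score_chunk is a stand-in; the paper ranks function-level
# chunks by conditional perplexity under a code LLM, not word overlap.

def score_chunk(chunk: str, instruction: str) -> float:
    """Toy relevance proxy: fraction of instruction words found in the chunk."""
    words = set(instruction.lower().split())
    hits = sum(1 for w in words if w in chunk.lower())
    return hits / max(len(words), 1)

def coarse_compress(chunks: list[str], instruction: str, keep: int) -> list[str]:
    """Rank function-level chunks by relevance to the instruction and
    keep the top-`keep`, preserving their original order in the file."""
    ranked = sorted(range(len(chunks)),
                    key=lambda i: score_chunk(chunks[i], instruction),
                    reverse=True)
    kept = sorted(ranked[:keep])  # restore source order
    return [chunks[i] for i in kept]

chunks = [
    "def parse_config(path): ...",
    "def sort_users(users): return sorted(users, key=lambda u: u.name)",
    "def log_request(req): ...",
]
print(coarse_compress(chunks, "sort the users by name", keep=1))
# keeps only the sort_users chunk, the one relevant to the instruction
```

In the real system the relevance score is the LLM's perplexity on the instruction conditioned on each chunk, which captures semantic relevance far better than this word-overlap toy.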
Why does it matter?
This work is important because it allows LLMs to handle real-world software projects, which are often very large. By making code processing faster and cheaper, LongCodeZip helps improve the practicality of using LLMs for tasks like code completion, summarizing code, and answering questions about code, ultimately making software development more efficient.
Abstract
Code generation under long contexts is becoming increasingly critical as Large Language Models (LLMs) are required to reason over extensive information in the codebase. While recent advances enable code LLMs to process long inputs, high API costs and generation latency remain substantial bottlenecks. Existing context pruning techniques, such as LLMLingua, achieve promising results for general text but overlook code-specific structures and dependencies, leading to suboptimal performance in programming tasks. In this paper, we propose LongCodeZip, a novel plug-and-play code compression framework designed specifically for code LLMs. LongCodeZip employs a dual-stage strategy: (1) coarse-grained compression, which identifies and ranks function-level chunks using conditional perplexity with respect to the instruction, retaining only the most relevant functions; and (2) fine-grained compression, which segments retained functions into blocks based on perplexity and selects an optimal subset under an adaptive token budget to maximize relevance. Evaluations across multiple tasks, including code completion, summarization, and question answering, show that LongCodeZip consistently outperforms baseline methods, achieving up to a 5.6x compression ratio without degrading task performance. By effectively reducing context size while preserving essential information, LongCodeZip enables LLMs to better scale to real-world, large-scale code scenarios, advancing the efficiency and capability of code intelligence applications.
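The fine-grained stage described in the abstract, selecting an optimal subset of blocks under an adaptive token budget to maximize relevance, is an instance of the 0/1 knapsack problem. A minimal sketch follows; the scores and token counts are made-up inputs, whereas the paper derives block scores from perplexity and adapts the budget per query.

```python
# Illustrative 0/1 knapsack selection for the fine-grained stage.
# Inputs are hypothetical; in LongCodeZip, each block's relevance comes
# from perplexity-based scoring and the token budget is adaptive.

def select_blocks(scores: list[float], costs: list[int], budget: int) -> list[int]:
    """Return indices of blocks maximizing total score within `budget` tokens."""
    # dp[b] = (best score, chosen indices) achievable with b tokens
    dp = [(0.0, [])] * (budget + 1)
    for i in range(len(scores)):
        new_dp = dp[:]
        for b in range(costs[i], budget + 1):
            cand = dp[b - costs[i]][0] + scores[i]
            if cand > new_dp[b][0]:
                new_dp[b] = (cand, dp[b - costs[i]][1] + [i])
        dp = new_dp  # each block is used at most once (0/1 knapsack)
    return dp[budget][1]

scores = [3.0, 1.0, 4.0, 2.0]   # block relevance (hypothetical)
costs  = [40, 10, 60, 30]       # block token counts (hypothetical)
print(select_blocks(scores, costs, budget=100))  # → [0, 2]
```

Blocks 0 and 2 together cost exactly 100 tokens for a total score of 7.0, beating any other combination that fits the budget.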