The Lottery LLM Hypothesis, Rethinking What Abilities Should LLM Compression Preserve?
Zhenheng Tang, Xiang Liu, Qian Wang, Peijie Dong, Bingsheng He, Xiaowen Chu, Bo Li
2025-02-26
Summary
This paper introduces a new idea called the Lottery LLM Hypothesis, which suggests that AI language models can be made smaller and more efficient without losing their abilities.
What's the problem?
Large AI language models are really good at many tasks, but they require a lot of compute and storage. Current methods for making these models smaller focus on keeping them good at simple tasks, and they may be losing other important abilities in the process.
What's the solution?
The researchers propose the Lottery LLM Hypothesis, which says that for any large AI model, there is a smaller version that can do just as well if it is given the right tools and ways to think through problems step by step. They review recent advances in AI, such as using external information and breaking down complex tasks, to figure out which abilities these smaller models need to keep.
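To make the idea concrete, here is a toy sketch (not from the paper; all function names and the task format are illustrative assumptions) of how a small model that cannot compute or memorize everything itself could still match a larger one by delegating sub-steps to external tools, in the spirit of retrieval-augmented generation and tool use:

```python
# Toy illustration of the lottery LLM idea: a small "model" delegates the
# parts it cannot handle (exact arithmetic, factual recall) to external
# tools. Everything here is a hypothetical sketch, not the paper's method.

def calculator_tool(expression: str) -> str:
    """External tool: exact arithmetic the small model delegates to."""
    # eval is restricted to plain arithmetic for this toy example.
    return str(eval(expression, {"__builtins__": {}}, {}))

# Stand-in for an external knowledge store used by retrieval augmentation.
KNOWLEDGE_STORE = {
    "speed of light (m/s)": "299792458",
}

def retrieval_tool(query: str) -> str:
    """External tool: looks up facts instead of memorizing them."""
    return KNOWLEDGE_STORE.get(query, "unknown")

def small_lottery_llm(task: str) -> str:
    """A toy 'small model': routes each sub-task to the right tool
    rather than solving everything with its own parameters."""
    if task.startswith("compute:"):
        return calculator_tool(task.removeprefix("compute:"))
    if task.startswith("lookup:"):
        return retrieval_tool(task.removeprefix("lookup:"))
    return "cannot solve without a tool"

print(small_lottery_llm("compute:2 + 3"))                 # exact arithmetic via tool
print(small_lottery_llm("lookup:speed of light (m/s)"))   # fact via retrieval
```

The point of the sketch is that the routing logic is tiny; the heavy lifting lives in the tools, which is why a compressed model must preserve the ability to plan and invoke them, not just raw QA accuracy.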
Why does it matter?
This matters because it could lead to AI models that are just as capable but use less energy and storage, making advanced AI more accessible and environmentally friendly. It also challenges how we think about model compression, suggesting we need to preserve a wider range of abilities, not just performance on simple tasks.
Abstract
Motivated by reducing the computational and storage costs of LLMs, model compression and KV cache compression have attracted much attention from researchers. However, current methods predominantly emphasize maintaining the performance of compressed LLMs, as measured by perplexity or simple accuracy on tasks of common sense knowledge QA and basic arithmetic reasoning. In this blog, we present a brief review of recent advancements in LLMs related to retrieval-augmented generation, multi-step reasoning, external tools, and computational expressivity, all of which substantially enhance LLM performance. Then, we propose a lottery LLM hypothesis suggesting that for a given LLM and task, there exists a smaller lottery LLM capable of producing the same performance as the original LLM with the assistance of multi-step reasoning and external tools. Based on the review of current progress in LLMs, we discuss and summarize the essential capabilities that the lottery LLM and KV cache compression must possess, which are currently overlooked in existing methods.