Swift-SVD: Theoretical Optimality Meets Practical Efficiency in Low-Rank LLM Compression
Ruoling Qi, Yirui Liu, Xuaner Wu, Xiangyu Wang, Ming Li, Chen Chen, Jian Chen, Yin Chen, Qizhen Weng
2026-04-06
Summary
This paper introduces a new method called Swift-SVD to make large language models, like those powering chatbots, run faster and use less memory.
What's the problem?
Large language models are huge and require a lot of memory and processing power, making them difficult to deploy and run efficiently. Much of this burden comes from the massive amounts of data they must store and access: the model's weights and the key-value (KV) cache. Existing methods for compressing this data face a trade-off: some don't compress very well, while others achieve the theoretically best compression but are too slow to be practical.
What's the solution?
The researchers developed Swift-SVD, a technique that compresses the model's data in a way that is both highly effective and fast. It works by analyzing how each layer's output activations respond to real inputs and finding a simplified low-rank representation of that behavior. The analysis is incremental: statistics are accumulated as more data is processed, and only a single expensive calculation (an eigenvalue decomposition) is needed after a batch of inputs has been aggregated. The researchers also devised a strategy for deciding how much to compress each part of the model, balancing the compression rate against how important that part is to overall performance.
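The core idea can be sketched in a few lines of NumPy. This is a minimal illustration of activation-aware low-rank approximation via an aggregated output-activation covariance, not the paper's actual implementation; all function names and shapes here are illustrative assumptions.

```python
import numpy as np

def aggregate_covariance(weight, input_batches):
    """Incrementally accumulate the second moment (covariance) of output
    activations Y = X W^T over a stream of input batches."""
    d_out = weight.shape[0]
    cov = np.zeros((d_out, d_out))
    for X in input_batches:   # X: (n_tokens, d_in)
        Y = X @ weight.T      # output activations for this batch
        cov += Y.T @ Y        # running aggregation; no decomposition yet
    return cov

def low_rank_factors(weight, cov, rank):
    """A single eigendecomposition after aggregation yields the top-rank
    output subspace; the dense weight is replaced by two thin factors."""
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    U = eigvecs[:, -rank:]                  # top-rank eigenvectors
    return U, U.T @ weight                  # W ≈ U (U^T W)

# Toy usage: compress a 64x128 weight to rank 16 using random "activations".
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))
batches = [rng.standard_normal((32, 128)) for _ in range(4)]
cov = aggregate_covariance(W, batches)
U, V = low_rank_factors(W, cov, rank=16)
W_approx = U @ V  # same shape as W, but stored as two smaller matrices
```

Because the covariance is aggregated incrementally, only one eigendecomposition is performed no matter how many batches are streamed, which is what makes this style of layer-wise compression fast.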
Why it matters?
This work matters because it makes large language models easier to run on devices with limited resources, such as phones or laptops, and speeds up how quickly the models respond. In experiments, Swift-SVD outperformed existing compression methods, matching or exceeding their accuracy while compressing models 3-70X faster.
Abstract
The deployment of Large Language Models is constrained by the memory and bandwidth demands of static weights and the dynamic Key-Value cache. SVD-based compression provides a hardware-friendly solution to reduce these costs. However, existing methods suffer from two key limitations: some are suboptimal in reconstruction error, while others are theoretically optimal but practically inefficient. In this paper, we propose Swift-SVD, an activation-aware, closed-form compression framework that simultaneously guarantees theoretical optimality, practical efficiency, and numerical stability. Swift-SVD incrementally aggregates the covariance of output activations over a batch of inputs and performs a single eigenvalue decomposition after aggregation, enabling training-free, fast, and optimal layer-wise low-rank approximation. We employ effective rank to analyze local layer-wise compressibility and design a dynamic rank allocation strategy that jointly accounts for local reconstruction loss and end-to-end layer importance. Extensive experiments across six LLMs and eight datasets demonstrate that Swift-SVD outperforms state-of-the-art baselines, achieving optimal compression accuracy while delivering 3-70X speedups in end-to-end compression time. Our code will be released upon acceptance.
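The abstract's "effective rank" is a standard continuous measure of how many singular values of a matrix carry significant energy, commonly defined as the exponential of the Shannon entropy of the normalized singular-value distribution. The sketch below shows that standard definition; whether Swift-SVD uses exactly this formula is an assumption based on the usual usage of the term.

```python
import numpy as np

def effective_rank(matrix):
    """Effective rank: exp of the Shannon entropy of the normalized
    singular values. Ranges from 1 (rank-1 energy) up to min(m, n)."""
    s = np.linalg.svd(matrix, compute_uv=False)
    p = s / s.sum()          # normalize singular values to a distribution
    p = p[p > 0]             # drop exact zeros before taking logs
    return float(np.exp(-np.sum(p * np.log(p))))
```

A layer whose weight (or activation) matrix has a low effective rank relative to its dimensions is highly compressible, which is why a rank allocation strategy can safely assign such layers smaller ranks.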