UniQL: Unified Quantization and Low-rank Compression for Adaptive Edge LLMs
Hung-Yueh Chiang, Chi-Chih Chang, Yu-Chen Lu, Chien-Yu Lin, Kai-Chiang Wu, Mohamed S. Abdelfattah, Diana Marculescu
2025-12-04
Summary
This paper introduces UniQL, a new system for making large language models, or LLMs, work better on phones and other mobile devices with limited resources.
What's the problem?
Large language models require far more memory and processing power than most phones can spare, making them difficult to run directly on mobile devices. Worse, the resources a phone can offer change depending on what else it is doing, so even a model that fits at one moment may not fit the next. In short, these models are too big and too demanding for reliable on-device use.
What's the solution?
The researchers developed UniQL, which shrinks these models using a few key techniques. First, they reduce the precision of the numbers the model uses (quantization). Second, they simplify the model's structure by removing less important connections (pruning). Third, they compress what remains into compact low-rank pieces, sorting the weights so the most important information is kept first. Importantly, UniQL lets the amount of pruning be adjusted *on the phone itself*, adapting to the device's current workload. All the heavy lifting of preparing the model is done beforehand in the cloud, so deployment stays fast and simple.
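To make the recipe concrete, here is a minimal, hypothetical sketch of the general idea rather than UniQL's actual algorithm: a weight matrix is factorized into low-rank pieces and quantized once offline, and a device can then apply any pruning rate simply by truncating the factors. All names, sizes, and rates below are illustrative.

```python
import numpy as np

def fake_quantize(w, n_bits=4):
    """Symmetric round-to-nearest quantization (illustrative only)."""
    qmax = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

# Toy weight matrix standing in for one LLM projection layer.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))

# Low-rank factorization: keep the top-k singular directions.
U, S, Vt = np.linalg.svd(W, full_matrices=False)
rank = 64
L, R = U[:, :rank] * S[:rank], Vt[:rank, :]

# Quantize the factors offline (the "cloud" step described above).
Lq, Rq = fake_quantize(L), fake_quantize(R)

# On-device, a pruning rate is applied by truncating more singular
# directions -- no re-factorization or re-quantization is needed.
prune_rate = 0.25                      # hypothetical runtime choice
keep = int(rank * (1 - prune_rate))
W_approx = Lq[:, :keep] @ Rq[:keep, :]

err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"relative reconstruction error: {err:.3f}")
```

Because truncation needs no re-computation, the same cloud-prepared model can serve many different pruning rates on the device.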
Why it matters?
This work is important because it allows more people to access the power of large language models directly on their phones without needing a constant internet connection or a super-powerful device. By making these models smaller and faster, UniQL opens up possibilities for new mobile applications that use AI, like improved voice assistants, better translation tools, and more personalized experiences, all while maintaining good accuracy.
Abstract
Deploying large language models (LLMs) on mobile platforms faces significant challenges due to the limited memory and shared computational resources of the device. Resource availability is further affected by the current device workload, adding to the uncertainty of model deployment. We introduce UniQL, a unified post-training quantization and low-rank compression framework with on-device configurable pruning rates for edge LLMs. UniQL is a general framework that integrates quantization and low-rank compression for Transformers, State Space Models (SSMs), and hybrid models to support diverse edge applications. In our proposed joint framework, we introduce an efficient structured weight-sorting method that speeds up computation by 20x, quantization-aware singular value decomposition (SVD) to minimize quantization errors, state-aware weight sorting for SSMs, and a fused rotary positional embedding (RoPE) kernel for pruned models. Our framework performs weight-sorting, fine-tuning, and quantization in the cloud in a single-pass workflow, while enabling on-device configurable pruning rates up to 35%. Our experiments show that quantized and pruned models achieve a memory reduction of 4x-5.7x and a token-throughput improvement of 2.7x-3.4x, maintaining accuracy within 5% of the original models at 15% pruning across Transformers (Llama3 and Qwen2.5), SSMs (Mamba2), and hybrid models (Nemotron-H and Bamba-v2). The code and quantized models are available at: https://github.com/enyac-group/UniQL.
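The abstract's "quantization-aware SVD to minimize quantization errors" suggests combining the two compressions so that one compensates for the other. As a rough, hypothetical illustration (the paper's formulation may differ), the sketch below quantizes a toy matrix and then fits a low-rank correction to the quantization residual; on a random toy matrix the gain is modest, but it shows the mechanics.

```python
import numpy as np

def quantize_int4(w):
    """Symmetric per-row round-to-nearest 4-bit quantization (illustrative only)."""
    qmax = 7  # signed 4-bit range is [-8, 7]; use +/-7 for symmetry
    scale = np.abs(w).max(axis=1, keepdims=True) / qmax
    return np.round(w / scale).clip(-qmax, qmax) * scale

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))   # toy stand-in for one projection matrix

# Quantization alone leaves a residual error W - Q(W).
Wq = quantize_int4(W)
residual = W - Wq

# Fit a low-rank factorization to the residual, so that
# W ~= Q(W) + L @ R with only a small amount of extra storage.
U, S, Vt = np.linalg.svd(residual, full_matrices=False)
rank = 32
L, R = U[:, :rank] * S[:rank], Vt[:rank, :]

plain = np.linalg.norm(residual) / np.linalg.norm(W)
corrected = np.linalg.norm(W - (Wq + L @ R)) / np.linalg.norm(W)
print(f"relative error, quantization only: {plain:.4f}")
print(f"relative error, with low-rank correction: {corrected:.4f}")
```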