Script: Graph-Structured and Query-Conditioned Semantic Token Pruning for Multimodal Large Language Models
Zhongyu Yang, Dannong Xu, Wei Pang, Yingfang Yuan
2025-12-02
Summary
This paper introduces a new method called Script to make large AI models that process both images and text more efficient, specifically by reducing the amount of visual information they need to handle.
What's the problem?
Modern AI models that understand both images and text, called multimodal large language models, must process a huge number of visual 'tokens' representing parts of images and videos. This leads to two main problems: they consume a lot of computer memory and are slow to respond, especially with high-resolution images or videos. Existing methods for reducing this visual information often either ignore what the user is actually asking about, or they aren't flexible enough to work well across different AI models.
What's the solution?
The researchers developed Script, which works like a two-step filter for visual information. First, it removes visually similar or redundant parts of an image or video. Second, it keeps the parts that are most relevant to the user's question or request. Importantly, Script doesn't require any additional training of the AI model, meaning it can be easily added to existing systems. It's designed to work with many different types of multimodal AI models.
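The two-step filter can be illustrated with a minimal sketch. This is not the paper's exact algorithm (the function name, thresholds, and the greedy redundancy filter are illustrative assumptions); it only shows the general idea of first dropping visually redundant tokens via pairwise similarity, then keeping the tokens most relevant to the query embedding:

```python
import numpy as np

def prune_tokens(tokens, query, sim_thresh=0.95, keep_k=4):
    """Two-stage pruning sketch (illustrative, not Script's exact method).

    Stage 1: greedily drop any token that is highly similar to an
             already-kept token (visual redundancy removal).
    Stage 2: among the survivors, keep the top-k tokens most similar
             to the query embedding (query-conditioned selection).
    Returns the indices of the kept tokens.
    """
    # Normalize rows so dot products equal cosine similarities.
    t = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)

    # Stage 1: redundancy filter over pairwise similarities.
    kept = []
    for i in range(len(t)):
        if all(t[i] @ t[j] < sim_thresh for j in kept):
            kept.append(i)

    # Stage 2: rank survivors by relevance to the query.
    scores = np.array([t[i] @ q for i in kept])
    order = np.argsort(-scores)[:keep_k]
    return [kept[i] for i in order]

# Toy example: token 1 nearly duplicates token 0, and the query
# points toward token 2, so tokens 2 and 3 survive.
tokens = np.array([[1.0, 0.0], [0.99, 0.01], [0.0, 1.0], [0.5, 0.5]])
query = np.array([0.0, 1.0])
print(prune_tokens(tokens, query, sim_thresh=0.95, keep_k=2))  # → [2, 3]
```

Because selection happens before the language model runs, a filter like this is training-free and model-agnostic, which is what makes the plug-and-play property possible.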
Why does it matter?
This work is important because it allows us to use powerful AI models that understand images and text without needing massive amounts of computing power. By making these models more efficient, they can run faster and on less expensive hardware, making them more accessible and practical for a wider range of applications. The experiments showed significant speedups and reductions in computational effort while maintaining high accuracy.
Abstract
The rapid growth of visual tokens in multimodal large language models (MLLMs) leads to excessive memory consumption and inference latency, especially when handling high-resolution images and videos. Token pruning is a technique used to mitigate this issue by removing redundancy, but existing methods often ignore relevance to the user query or suffer from the limitations of attention mechanisms, reducing their adaptability and effectiveness. To address these challenges, we propose Script, a plug-and-play pruning method that requires no retraining and generalizes across diverse MLLMs. Script comprises two modules: a graph-structured pruning module that removes visually redundant tokens, and a query-conditioned semantic pruning module that preserves query-relevant visual information. Together, they enhance performance on multimodal tasks. Experiments on fourteen benchmarks across image and video understanding tasks show that Script consistently achieves higher model efficiency and predictive accuracy compared to existing pruning methods. On LLaVA-NeXT-7B, it achieves up to 6.8x prefill speedup and 10x FLOP reduction, while retaining 96.88% of the original performance.