VisionZip: Longer is Better but Not Necessary in Vision Language Models
Senqiao Yang, Yukang Chen, Zhuotao Tian, Chengyao Wang, Jingyao Li, Bei Yu, Jiaya Jia
2024-12-06

Summary
This paper introduces VisionZip, a method that improves the efficiency of vision-language models by reducing the number of visual tokens they process, making them faster and cheaper to run while keeping performance high.
What's the problem?
Recent vision-language models rely on very long sequences of visual tokens extracted from images. These long sequences make the models slower and more demanding to run, even though many of the tokens carry little useful information, so much of the computation is spent on redundancy.
What's the solution?
The authors introduce VisionZip, which selects only the most informative visual tokens to feed into the language model. This cuts out redundant information and significantly speeds up processing. VisionZip outperforms the previous state-of-the-art method by at least 5% across nearly all settings and reduces prefilling time by up to 8x.
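The core idea is to keep only the visual tokens the model actually attends to. Below is a minimal sketch of attention-based token selection under that assumption; the function name, tensor shapes, and the keep_ratio parameter are illustrative choices for this example, not the authors' exact implementation (which also handles the remaining tokens rather than simply discarding them).

```python
import torch


def select_visual_tokens(visual_tokens: torch.Tensor,
                         cls_attention: torch.Tensor,
                         keep_ratio: float = 0.1) -> torch.Tensor:
    """Keep only the most-attended visual tokens (illustrative sketch).

    visual_tokens: (batch, num_tokens, dim) patch embeddings from a vision encoder.
    cls_attention: (batch, num_tokens) attention weights from the [CLS] token
                   to each patch token (hypothetical shapes for this example).
    keep_ratio:    fraction of tokens to retain.
    """
    batch, num_tokens, dim = visual_tokens.shape
    num_keep = max(1, int(num_tokens * keep_ratio))

    # Rank tokens by how much attention the [CLS] token pays to them.
    topk_idx = cls_attention.topk(num_keep, dim=-1).indices           # (batch, num_keep)

    # Gather only the selected tokens; the rest are dropped in this sketch.
    idx = topk_idx.unsqueeze(-1).expand(-1, -1, dim)                  # (batch, num_keep, dim)
    return visual_tokens.gather(dim=1, index=idx)
```

Because the language model's cost grows with the number of visual tokens it receives, shrinking this set (e.g., from 576 tokens to roughly a tenth of that) is what yields the reported speedups during prefilling.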
Why it matters?
This research matters because it makes vision-language models faster and more efficient, and it encourages future work to focus on extracting better visual features rather than simply increasing the length of token sequences. These gains carry over to real-world applications such as image and video understanding and multi-turn conversations.
Abstract
Recent advancements in vision-language models have enhanced performance by increasing the length of visual tokens, making them much longer than text tokens and significantly raising computational costs. However, we observe that the visual tokens generated by popular vision encoders, such as CLIP and SigLIP, contain significant redundancy. To address this, we introduce VisionZip, a simple yet effective method that selects a set of informative tokens for input to the language model, reducing visual token redundancy and improving efficiency while maintaining model performance. The proposed VisionZip can be widely applied to image and video understanding tasks and is well-suited for multi-turn dialogues in real-world scenarios, where previous methods tend to underperform. Experimental results show that VisionZip outperforms the previous state-of-the-art method by at least 5% performance gains across nearly all settings. Moreover, our method significantly enhances model inference speed, improving the prefilling time by 8x and enabling the LLaVA-Next 13B model to infer faster than the LLaVA-Next 7B model while achieving better results. Furthermore, we analyze the causes of this redundancy and encourage the community to focus on extracting better visual features rather than merely increasing token length. Our code is available at https://github.com/dvlab-research/VisionZip .