When Less is Enough: Adaptive Token Reduction for Efficient Image Representation

Eduard Allakhverdov, Elizaveta Goncharova, Andrey Kuznetsov

2025-03-24

Summary

This paper explores how to make AI models that process images more efficient by discarding visual information that contributes little to the result.

What's the problem?

AI models that process images often use a lot of computing power because they look at every detail, even if some details aren't important.

What's the solution?

The researchers created a way for the AI to figure out which pieces of visual information are truly important: a piece can be safely dropped if it can be reconstructed from the pieces that are kept. Discarding the rest makes the model faster and more efficient.

Why it matters?

This work matters because it can make AI image processing more practical for use on phones and other devices with limited computing power.

Abstract

Vision encoders typically generate a large number of visual tokens, providing information-rich representations but significantly increasing computational demands. This raises the question of whether all generated tokens are equally valuable or if some of them can be discarded to reduce computational costs without compromising quality. In this paper, we introduce a new method for determining feature utility based on the idea that less valuable features can be reconstructed from more valuable ones. We implement this concept by integrating an autoencoder with a Gumbel-Softmax selection mechanism that identifies and retains only the most informative visual tokens. To validate our approach, we compared the performance of the LLaVA-NeXT model when using features selected by our method against randomly selected features. We found that on OCR-based tasks, more than 50% of the visual context can be removed with minimal performance loss, whereas randomly discarding the same proportion of features significantly degrades the model's capabilities. Furthermore, in general-domain tasks, even randomly retaining only 30% of tokens achieves performance comparable to using the full set of visual tokens. Our results highlight a promising direction towards adaptive and efficient multimodal pruning that facilitates scalable and low-overhead inference without compromising performance.
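To make the selection mechanism concrete, below is a minimal forward-pass sketch of Gumbel-Softmax token selection in NumPy. It is not the authors' implementation: the scorer logits, array shapes, and the `select_tokens` helper are illustrative assumptions; the full method additionally trains an autoencoder to reconstruct the dropped tokens from the kept ones, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=1.0):
    """Relaxed categorical sampling: add Gumbel(0,1) noise to the
    logits, then apply a temperature-scaled softmax."""
    u = rng.uniform(low=1e-10, high=1.0, size=logits.shape)
    g = -np.log(-np.log(u))                  # Gumbel(0,1) noise
    y = (logits + g) / tau
    y = y - y.max(axis=-1, keepdims=True)    # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

def select_tokens(tokens, keep_logits, tau=0.5):
    """Keep the tokens whose 'keep' gate wins the Gumbel-Softmax draw.
    keep_logits holds per-token scores for the (drop, keep) choice."""
    soft = gumbel_softmax(keep_logits, tau)  # (n_tokens, 2)
    mask = soft.argmax(axis=-1) == 1         # hard keep/drop decision
    return tokens[mask], mask

# Toy example: 16 visual tokens of dimension 8; a hypothetical scorer
# strongly prefers keeping the first half and dropping the second.
tokens = rng.standard_normal((16, 8))
keep_logits = np.zeros((16, 2))
keep_logits[:8, 1] = 5.0   # high "keep" score
keep_logits[8:, 0] = 5.0   # high "drop" score

kept, mask = select_tokens(tokens, keep_logits)
print(kept.shape[0], "of", tokens.shape[0], "tokens kept")
```

In training, the hard argmax would be replaced (or backed by a straight-through estimator) so that gradients flow through the soft gate probabilities, letting the scorer learn which tokens to keep end to end.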