Are We Using the Right Benchmark: An Evaluation Framework for Visual Token Compression Methods

Chenfei Liao, Wensong Wang, Zichen Wen, Xu Zheng, Yiyu Wang, Haocong He, Yuanhuiyi Lyu, Lutao Jiang, Xin Zou, Yuqian Fu, Bin Ren, Linfeng Zhang, Xuming Hu

2025-10-09

Summary

This paper investigates how we evaluate methods that try to speed up Multimodal Large Language Models (MLLMs), AI systems that can understand both images and text. The focus is on techniques that compress the visual tokens (the pieces of image information) the model processes.

What's the problem?

Currently, we judge how well these visual token compression techniques work by checking whether they lower the MLLM's accuracy on standard benchmarks. However, those benchmarks were designed to measure how *good* an MLLM is at understanding images and reasoning, not to evaluate compression methods. This creates a task mismatch, making it hard to tell whether a compression method is truly effective or the benchmark is simply unsuited to the question. Surprisingly, simply shrinking images (downsampling) often outperforms far more sophisticated compression methods on these benchmarks.

What's the solution?

The researchers argue that existing benchmarks are unreliable for evaluating compression. They found that downsampling can serve as a data filter: it reveals which samples remain easy even after aggressive compression and which ones genuinely require the full visual detail. Based on this, they built a new evaluation framework called VTC-Bench, which filters out these uninformative, noisy samples from standard benchmarks, giving a fairer and more accurate assessment of how well different compression techniques perform.
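The filtering idea can be sketched in a few lines. This is an illustrative assumption, not the paper's actual implementation: the function names (`filter_benchmark`, `answer_fn`, `downsample_fn`) and the keep/drop rule are hypothetical, and the "model" in the demo is a stub. The gist is to keep only samples that the model answers correctly at full resolution but fails on after downsampling, since only those samples meaningfully stress a compression method.

```python
# Hypothetical sketch of a downsampling-based data filter in the spirit of
# VTC-Bench. All names and the keep/drop rule here are illustrative.

def filter_benchmark(samples, answer_fn, downsample_fn):
    """Keep samples the model gets right at full resolution but wrong after
    downsampling: these are the compression-sensitive, informative ones."""
    kept = []
    for sample in samples:
        full_ok = answer_fn(sample["image"], sample["question"]) == sample["answer"]
        down_ok = answer_fn(downsample_fn(sample["image"]), sample["question"]) == sample["answer"]
        if full_ok and not down_ok:  # informative for compression evaluation
            kept.append(sample)
    return kept

# Toy demo: "images" are just resolution integers, and the stub model
# succeeds only when resolution meets the question's difficulty.
samples = [
    {"image": 448, "question": 100, "answer": True},  # easy even downsampled -> noise
    {"image": 448, "question": 300, "answer": True},  # fails after downsampling -> kept
]
answer_fn = lambda img, q: img >= q
downsample_fn = lambda img: img // 2  # halve the resolution

filtered = filter_benchmark(samples, answer_fn, downsample_fn)
print(filtered)  # only the second, compression-sensitive sample survives
```

In a real setting, `answer_fn` would run the MLLM on the image-question pair and `downsample_fn` would resize the actual image; the point is that the filter needs no labels beyond the benchmark's existing answers.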

Why it matters?

This work matters because it provides a sounder way to measure the effectiveness of visual token compression for MLLMs. With a more reliable evaluation framework, researchers can develop faster, more efficient MLLMs without sacrificing performance, ultimately leading to more practical and accessible AI systems.

Abstract

Recent endeavors to accelerate inference in Multimodal Large Language Models (MLLMs) have primarily focused on visual token compression. The effectiveness of these methods is typically assessed by measuring the accuracy drop on established benchmarks, comparing model performance before and after compression. However, these benchmarks are originally designed to assess the perception and reasoning capabilities of MLLMs, rather than to evaluate compression techniques. As a result, directly applying them to visual token compression introduces a task mismatch. Strikingly, our investigation reveals that simple image downsampling consistently outperforms many advanced compression methods across multiple widely used benchmarks. Through extensive experiments, we make the following observations: (i) Current benchmarks are noisy for the visual token compression task. (ii) Down-sampling is able to serve as a data filter to evaluate the difficulty of samples in the visual token compression task. Motivated by these findings, we introduce VTC-Bench, an evaluation framework that incorporates a data filtering mechanism to denoise existing benchmarks, thereby enabling fairer and more accurate assessment of visual token compression methods. All data and code are available at https://github.com/Chenfei-Liao/VTC-Bench.