TokenPacker: Efficient Visual Projector for Multimodal LLM

Wentong Li, Yuqian Yuan, Jian Liu, Dongqi Tang, Song Wang, Jianke Zhu, Lei Zhang

2024-07-04

Summary

This paper introduces TokenPacker, a visual projector designed to improve how visual information is processed in multimodal large language models (MLLMs) by compressing visual tokens efficiently while preserving fine-grained detail.

What's the problem?

The main problem is that when working with high-resolution images, the number of visual tokens grows rapidly and can overwhelm an MLLM. Traditional projectors map every visual feature to a token one-to-one, producing many redundant tokens that slow down processing, while simple compression schemes discard the fine details the model needs to understand an image.

What's the solution?

To solve this issue, the authors developed TokenPacker, which uses a 'coarse-to-fine' approach. This means it starts with a low-resolution version of the visual data and gradually enhances it by injecting detailed information from high-resolution cues. Specifically, it first creates a basic representation of the image and then adds finer details from specific regions of the image to improve understanding. This method allows TokenPacker to reduce the number of visual tokens by 75% to 89% while still achieving strong performance in various tasks.
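The coarse-to-fine idea can be illustrated in code: downsample the visual feature grid into a small set of coarse "point queries", then let each query cross-attend to the high-resolution tokens inside its own local region. The sketch below is an assumption-laden illustration (module names, shapes, and the single-head attention are my own choices), not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFineProjector(nn.Module):
    """Illustrative sketch: compress a grid of visual tokens by letting each
    coarse (low-resolution) query absorb the high-res tokens in its region."""

    def __init__(self, dim: int, scale: int = 2):
        super().__init__()
        self.scale = scale  # side-length downsampling factor (2 -> 75% fewer tokens)
        self.q_proj = nn.Linear(dim, dim)
        self.kv_proj = nn.Linear(dim, 2 * dim)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, H, W, C) high-resolution visual features
        B, H, W, C = feats.shape
        s = self.scale
        # 1) Coarse point queries via bilinear interpolation to (H/s, W/s).
        coarse = F.interpolate(
            feats.permute(0, 3, 1, 2), size=(H // s, W // s),
            mode="bilinear", align_corners=False,
        ).permute(0, 2, 3, 1)                         # (B, H/s, W/s, C)
        q = self.q_proj(coarse).reshape(B, -1, 1, C)  # one query per region
        # 2) Group high-res tokens into s*s local regions as keys/values.
        regions = feats.reshape(B, H // s, s, W // s, s, C)
        regions = regions.permute(0, 1, 3, 2, 4, 5).reshape(B, -1, s * s, C)
        k, v = self.kv_proj(regions).chunk(2, dim=-1)
        # 3) Region-to-point injection: each coarse query attends only to the
        #    high-res tokens of its own local region.
        attn = (q @ k.transpose(-2, -1)) / C ** 0.5   # (B, N, 1, s*s)
        out = attn.softmax(dim=-1) @ v                # (B, N, 1, C)
        return self.out_proj(out.squeeze(2) + coarse.reshape(B, -1, C))

proj = CoarseToFineProjector(dim=64, scale=2)
x = torch.randn(1, 24, 24, 64)  # e.g. a 24x24 grid of visual tokens
y = proj(x)
print(tuple(y.shape))           # (1, 144, 64): 576 tokens compressed to 144
```

With `scale=2` the 576 input tokens become 144 output tokens, a 75% reduction, which matches the lower end of the range reported in the paper.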

Why it matters?

This research is important because it makes it easier and faster for AI models to process complex visual information without losing important details. By improving efficiency in handling visual data, TokenPacker can enhance applications in fields like computer vision, robotics, and any area where understanding images alongside text is crucial.

Abstract

The visual projector serves as an essential bridge between the visual encoder and the Large Language Model (LLM) in a Multimodal LLM (MLLM). Typically, MLLMs adopt a simple MLP to preserve all visual contexts via a one-to-one transformation. However, the visual tokens are redundant and can increase considerably when dealing with high-resolution images, significantly impairing the efficiency of MLLMs. Some recent works have introduced a resampler or abstractor to reduce the number of resulting visual tokens. Unfortunately, they fail to capture finer details and undermine the visual reasoning capabilities of MLLMs. In this work, we propose a novel visual projector, which adopts a coarse-to-fine scheme to inject enriched characteristics and generate condensed visual tokens. Specifically, we first interpolate the visual features as a low-resolution point query, providing the overall visual representation as the foundation. Then, we introduce a region-to-point injection module that utilizes high-resolution, multi-level region-based cues as fine-grained reference keys and values, allowing them to be fully absorbed within the corresponding local context region. This step effectively updates the coarse point query, transforming it into an enriched one for the subsequent LLM reasoning. Extensive experiments demonstrate that our approach compresses the visual tokens by 75%~89%, while achieving comparable or even better performance across diverse benchmarks with significantly higher efficiency. The source codes can be found at https://github.com/CircleRadon/TokenPacker.
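The 75%~89% compression range follows directly from downsampling the token grid by a side factor s, which shrinks the token count by 1 - 1/s². The factors below (2 and 3) are an assumption chosen because they reproduce the reported range; the paper may use a different configuration.

```python
# Token compression from downsampling a square token grid by side factor s.
def compression(s: int) -> float:
    """Fraction of visual tokens removed when each side is downsampled by s."""
    return 1 - 1 / (s * s)

print(f"s=2: {compression(2):.0%} fewer tokens")   # s=2: 75% fewer tokens (e.g. 576 -> 144)
print(f"s=3: {compression(3):.1%} fewer tokens")   # s=3: 88.9% fewer tokens (e.g. 576 -> 64)
```

Both values fall inside the 75%~89% range reported in the abstract.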