LLaVA-Mini: Efficient Image and Video Large Multimodal Models with One Vision Token
Shaolei Zhang, Qingkai Fang, Zhe Yang, Yang Feng
2025-01-08

Summary
This paper introduces LLaVA-Mini, a new AI model that can understand images and videos far more efficiently than previous models.
What's the problem?
Current AI models that work with both text and images (called multimodal models) use a lot of computing power and memory. This is because they convert each image into hundreds of 'vision tokens', which are like digital pieces of the image; LLaVA-v1.5, for example, uses 576 of them per image. Feeding so many tokens into the language model makes these systems slow and expensive to run.
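A toy calculation can make this concrete (the 576 comes from the LLaVA-v1.5 comparison in the abstract; the instruction length is a hypothetical number for illustration, not from the paper):

```python
# Toy illustration (not from the paper's code): with hundreds of vision tokens per image,
# the image dominates the model's context, and the compute needed to answer a question
# grows roughly in proportion to the number of context tokens.

vision_tokens = 576   # LLaVA-v1.5 encodes each image as a 24 x 24 grid of patch tokens
text_tokens = 64      # a typical short question (hypothetical length)

total = vision_tokens + text_tokens
print(f"vision tokens make up {vision_tokens / total:.0%} of the context")  # -> 90%
```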
What's the solution?
The researchers created LLaVA-Mini, which works differently. Instead of feeding many vision tokens into the language model, it fuses image information into the text tokens early in the process (a step the paper calls modality pre-fusion). This lets LLaVA-Mini pass just one vision token per image to the language model, which is much more efficient. They tested LLaVA-Mini on 11 image benchmarks and 7 video benchmarks and found it works as well as or better than LLaVA-v1.5, which uses 576 vision tokens, while being much faster and using less memory.
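A minimal sketch of this idea, not the authors' code: a small pre-fusion module lets the text tokens attend to the full set of vision tokens before the language model, and a single learnable query pools all vision tokens into one compressed token. Module names, sizes, and the exact pooling mechanism here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PreFusionAndCompression(nn.Module):
    """Hypothetical sketch of modality pre-fusion + vision-token compression."""

    def __init__(self, dim: int = 4096, num_heads: int = 8):
        super().__init__()
        # Text tokens attend to vision tokens to absorb visual information early.
        self.pre_fusion = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # One learnable query pools all vision tokens into a single compressed token.
        self.compress_query = nn.Parameter(torch.randn(1, 1, dim))
        self.compression = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, vision_tokens, text_tokens):
        # vision_tokens: (B, 576, dim), text_tokens: (B, T, dim)
        fused_text, _ = self.pre_fusion(text_tokens, vision_tokens, vision_tokens)
        fused_text = fused_text + text_tokens                          # residual fusion
        query = self.compress_query.expand(vision_tokens.size(0), -1, -1)
        one_vision_token, _ = self.compression(query, vision_tokens, vision_tokens)
        # The LLM backbone now sees 1 vision token + T pre-fused text tokens.
        return torch.cat([one_vision_token, fused_text], dim=1)
```

The design choice follows the summary above: because the visual information is already folded into the text tokens, the language model no longer needs the full grid of vision tokens in its context.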
Why it matters?
This matters because it makes AI that can understand images and videos much more practical to use. LLaVA-Mini can process images and videos much faster and can handle longer videos without running out of memory. This could lead to better AI assistants, more efficient video processing, and new applications on devices with limited computing power, like phones or small robots. It's a big step towards making advanced AI more accessible and useful in everyday situations.
Abstract
The advent of real-time large multimodal models (LMMs) like GPT-4o has sparked considerable interest in efficient LMMs. LMM frameworks typically encode visual inputs into vision tokens (continuous representations) and integrate them, together with textual instructions, into the context of large language models (LLMs), where large-scale parameters and numerous context tokens (predominantly vision tokens) result in substantial computational overhead. Previous efforts toward efficient LMMs typically focus on replacing the LLM backbone with smaller models, while neglecting the crucial issue of token quantity. In this paper, we introduce LLaVA-Mini, an efficient LMM with minimal vision tokens. To achieve a high compression ratio of vision tokens while preserving visual information, we first analyze how LMMs understand vision tokens and find that most vision tokens only play a crucial role in the early layers of the LLM backbone, where they mainly fuse visual information into text tokens. Building on this finding, LLaVA-Mini introduces modality pre-fusion to fuse visual information into text tokens in advance, thereby facilitating the extreme compression of the vision tokens fed to the LLM backbone into one token. LLaVA-Mini is a unified large multimodal model that can support the understanding of images, high-resolution images, and videos in an efficient manner. Experiments across 11 image-based and 7 video-based benchmarks demonstrate that LLaVA-Mini outperforms LLaVA-v1.5 with just 1 vision token instead of 576. Efficiency analyses reveal that LLaVA-Mini can reduce FLOPs by 77%, deliver low-latency responses within 40 milliseconds, and process over 10,000 frames of video on GPU hardware with 24GB of memory.
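The finding that vision tokens matter mainly in the early LLM layers motivates the whole method. Below is a minimal sketch of how such a layer-wise measurement could look; the tensor shapes, the assumption that vision tokens occupy the first positions of the sequence, and the aggregation are illustrative assumptions, not the authors' analysis code.

```python
import torch

def attention_to_vision_per_layer(attentions, num_vision_tokens):
    """For each LLM layer, estimate what fraction of the text tokens' attention mass
    lands on the vision tokens.

    attentions: list with one tensor per layer, each shaped (batch, heads, seq, seq),
    e.g. the `attentions` field returned by a HuggingFace model called with
    output_attentions=True. Assumes vision tokens sit in the first `num_vision_tokens`
    positions of the sequence (an assumption for this sketch).
    """
    fractions = []
    for layer_attn in attentions:
        # Attention from text positions (queries) to vision positions (keys).
        text_to_vision = layer_attn[:, :, num_vision_tokens:, :num_vision_tokens]
        text_to_all = layer_attn[:, :, num_vision_tokens:, :]
        fractions.append((text_to_vision.sum(-1) / text_to_all.sum(-1)).mean().item())
    return fractions  # expected to drop sharply after the early layers
```

If this per-layer fraction is large only in the first few layers, most vision tokens can be pre-fused into the text tokens and dropped before the deeper layers, which is exactly the compression the abstract describes.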