ADEM-VL: Adaptive and Embedded Fusion for Efficient Vision-Language Tuning

Zhiwei Hao, Jianyuan Guo, Li Shen, Yong Luo, Han Hu, Yonggang Wen

2024-10-25

Summary

This paper introduces ADEM-VL, a method for efficiently tuning vision-language models (models that understand both images and text) so that they train and run faster while using fewer resources.

What's the problem?

Vision-language models are powerful tools that can perform tasks like image captioning and answering questions about pictures, but they require a lot of computing power and memory. This is because appending visual features to the text input makes the language model's input sequence much longer, which adds computation, and the extra learnable parameters introduced for fusion increase memory use. Together these costs slow the models down and make them harder to deploy in real-world applications.

What's the solution?

To solve these problems, ADEM-VL embeds vision features directly into the language model's representation space and fuses them with the text using a parameter-free cross-attention mechanism, so no extra learnable attention weights are added that would increase memory use. It also generates multiscale visual features from a single pass through the vision encoder, and applies an adaptive fusion scheme that, for each text token, keeps only the most relevant visual features and discards the rest. Together these choices reduce the number of trainable parameters and speed up both training and inference. The authors tested ADEM-VL on visual question answering, image captioning, and instruction-following tasks and found it outperformed existing methods, achieving better accuracy while reducing training and inference time.
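As a rough illustration of these two ideas, the PyTorch-style sketch below (not the authors' exact implementation) fuses text hidden states with vision features that are assumed to be already projected into the language space. The similarity scores come from a plain scaled dot product with no learned query/key/value weights, and the keep_ratio argument is an illustrative stand-in for the paper's attention-score-based adaptive dropping of less relevant visual tokens.

```python
import torch
import torch.nn.functional as F

def parameter_free_fusion(text_hidden, vision_embed, keep_ratio=0.5):
    """Similarity-based fusion sketch with no learned attention weights.

    text_hidden:  (batch, num_text_tokens, dim)  LLM hidden states
    vision_embed: (batch, num_vis_tokens, dim)   vision features already
                                                 embedded in the language space
    keep_ratio:   fraction of visual tokens each text token attends to
                  (assumption for illustration, not the paper's exact rule)
    """
    # Parameter-free "cross-attention": plain scaled dot-product similarity,
    # with no learned query/key/value projection matrices.
    scale = text_hidden.shape[-1] ** -0.5
    scores = torch.einsum("btd,bvd->btv", text_hidden, vision_embed) * scale

    # Adaptive fusion: per text token, keep only the top-k most relevant
    # visual tokens and mask out the rest before the softmax.
    k = max(1, int(keep_ratio * vision_embed.shape[1]))
    keep = scores.topk(k, dim=-1).indices
    mask = torch.full_like(scores, float("-inf")).scatter(-1, keep, 0.0)
    attn = F.softmax(scores + mask, dim=-1)

    # Aggregate the selected visual features and add them to the text stream.
    fused = torch.einsum("btv,bvd->btd", attn, vision_embed)
    return text_hidden + fused


# Toy usage: 2 samples, 16 text tokens, 64 visual tokens, hidden size 768.
text = torch.randn(2, 16, 768)
vision = torch.randn(2, 64, 768)
out = parameter_free_fusion(text, vision)   # shape: (2, 16, 768)
```

Because the fusion relies only on dot-product similarity, the only new trainable parameters in such a scheme would sit in the projection that embeds vision features into the language space, which is where the memory savings come from.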

Why it matters?

This research is important because it makes advanced AI models more accessible by improving their efficiency. By allowing these models to process information faster and with less resource use, ADEM-VL can help developers create better applications in areas like education, healthcare, and entertainment that rely on understanding both images and text.

Abstract

Recent advancements in multimodal fusion have witnessed the remarkable success of vision-language (VL) models, which excel in various multimodal applications such as image captioning and visual question answering. However, building VL models requires substantial hardware resources, where efficiency is restricted by two key factors: the extended input sequence of the language model with vision features demands more computational operations, and a large number of additional learnable parameters increase memory complexity. These challenges significantly restrict the broader applicability of such models. To bridge this gap, we propose ADEM-VL, an efficient vision-language method that tunes VL models based on pretrained large language models (LLMs) by adopting a parameter-free cross-attention mechanism for similarity measurements in multimodal fusion. This approach only requires embedding vision features into the language space, significantly reducing the number of trainable parameters and accelerating both training and inference speeds. To enhance representation learning in the fusion module, we introduce an efficient multiscale feature generation scheme that requires only a single forward pass through the vision encoder. Moreover, we propose an adaptive fusion scheme that dynamically discards less relevant visual information for each text token based on its attention score. This ensures that the fusion process prioritizes the most pertinent visual features. With experiments on various tasks including visual question answering, image captioning, and instruction-following, we demonstrate that our framework outperforms existing approaches. Specifically, our method surpasses existing methods by an average accuracy of 0.77% on the ScienceQA dataset, with reduced training and inference latency, demonstrating the superiority of our framework. The code is available at https://github.com/Hao840/ADEM-VL.
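The abstract also mentions generating multiscale visual features from a single forward pass through the vision encoder. One common way to realize that idea is to pool the encoder's patch grid at several resolutions; the sketch below shows this general approach under assumed arguments (grid_size and scales are illustrative, not the paper's exact recipe).

```python
import torch
import torch.nn.functional as F

def multiscale_from_single_pass(patch_tokens, grid_size, scales=(1, 2, 4)):
    """Hypothetical sketch: derive multiscale visual features from one
    vision-encoder forward pass by average-pooling the patch grid.

    patch_tokens: (batch, grid_size * grid_size, dim) patch features from a
                  single encoder pass (e.g., a ViT without the CLS token)
    scales:       pooling factors; scale 1 keeps the original resolution
    """
    b, n, d = patch_tokens.shape
    grid = patch_tokens.transpose(1, 2).reshape(b, d, grid_size, grid_size)

    features = []
    for s in scales:
        # Average-pool the spatial grid; no additional encoder passes needed.
        pooled = F.avg_pool2d(grid, kernel_size=s) if s > 1 else grid
        features.append(pooled.flatten(2).transpose(1, 2))  # (b, (g/s)^2, d)

    # Concatenate all scales into one visual token sequence for fusion.
    return torch.cat(features, dim=1)


# Toy usage: a 16x16 patch grid with hidden size 768.
patches = torch.randn(2, 16 * 16, 768)
multiscale = multiscale_from_single_pass(patches, grid_size=16)
print(multiscale.shape)  # (2, 256 + 64 + 16, 768)
```

In a pipeline like the one described above, the concatenated multiscale tokens would play the role of the vision features that are embedded into the language space and then fused with the text via the parameter-free attention.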