VLM2Vec: Training Vision-Language Models for Massive Multimodal Embedding Tasks
Ziyan Jiang, Rui Meng, Xinyi Yang, Semih Yavuz, Yingbo Zhou, Wenhu Chen
2024-10-10

Summary
This paper introduces VLM2Vec, a framework for turning vision-language models into universal embedding models that can handle a wide range of tasks involving both images and text.
What's the problem?
While embedding models are important for tasks like finding similar items or organizing information, progress toward universal models that work well across many different tasks has been slow. Existing models often struggle to effectively combine images and text, which limits their usefulness in real-world applications.
What's the solution?
To address this, the authors developed VLM2Vec, a contrastive training framework that converts a state-of-the-art vision-language model into an embedding model capable of handling many different tasks. They also introduced the Massive Multimodal Embedding Benchmark (MMEB), a collection of 36 datasets spanning classification, visual question answering, multimodal retrieval, and visual grounding, used for both training and evaluation. VLM2Vec can process any combination of images and text and produces a fixed-dimensional vector conditioned on a task instruction. On MMEB's evaluation split, the model outperformed existing multimodal embedding models by an absolute average of 10% to 20%.
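To make the core idea concrete, here is a minimal, hypothetical sketch of how a generative vision-language model can be repurposed as an embedding model by pooling the hidden state of its final input token into one fixed-dimensional vector. The function name, the processor/model interfaces, and the pooling choice are illustrative assumptions based on the paper's description, not the authors' released code.

```python
# Minimal sketch (not the authors' code): use a causal vision-language model
# as an embedding model by taking the hidden state of the last input token.
import torch
import torch.nn.functional as F

def embed(vlm, processor, instruction, text=None, image=None, device="cuda"):
    """Encode an (instruction, image, text) combination into one fixed-size vector.

    `vlm` is assumed to be a Hugging Face-style causal VLM that can return
    hidden states; `processor` is its matching tokenizer/image processor.
    """
    prompt = instruction if text is None else f"{instruction}\n{text}"
    inputs = processor(text=prompt, images=image, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = vlm(**inputs, output_hidden_states=True)
    last_hidden = outputs.hidden_states[-1]   # (1, seq_len, hidden_dim)
    vec = last_hidden[:, -1, :]               # hidden state of the final token
    return F.normalize(vec, dim=-1)           # unit-length embedding
```

Because the instruction is part of the encoded input, the same model can produce different embeddings for the same image-text pair depending on the task it is asked to perform.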
Why it matters?
This research is important because it enhances the ability of AI systems to understand and integrate information from different sources, like images and text. By creating more effective universal embedding models, VLM2Vec can improve applications in areas such as search engines, recommendation systems, and content analysis, ultimately making technology more intuitive and useful.
Abstract
Embedding models have been crucial in enabling various downstream tasks such as semantic similarity, information retrieval, and clustering. Recently, there has been a surge of interest in developing universal text embedding models that can generalize across tasks (e.g., MTEB). However, progress in learning universal multimodal embedding models has been relatively slow despite their importance. In this work, we aim to explore the potential for building universal embeddings capable of handling a wide range of downstream tasks. Our contributions are twofold: (1) MMEB (Massive Multimodal Embedding Benchmark), which covers 4 meta-tasks (i.e., classification, visual question answering, multimodal retrieval, and visual grounding) and 36 datasets, including 20 training and 16 evaluation datasets, and (2) VLM2Vec (Vision-Language Model -> Vector), a contrastive training framework that converts any state-of-the-art vision-language model into an embedding model via training on MMEB. Unlike previous models such as CLIP and BLIP, VLM2Vec can process any combination of images and text to generate a fixed-dimensional vector based on task instructions. We build a series of VLM2Vec models on Phi-3.5-V and evaluate them on MMEB's evaluation split. Our results show that VLM2Vec achieves an absolute average improvement of 10% to 20% over existing multimodal embedding models on both in-distribution and out-of-distribution datasets in MMEB.
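The abstract describes VLM2Vec as a contrastive training framework. Below is a minimal sketch of the standard form such an objective usually takes: InfoNCE with in-batch negatives over paired query and target embeddings. The function signature and temperature value are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a contrastive (InfoNCE) objective with in-batch negatives.
import torch
import torch.nn.functional as F

def contrastive_loss(query_emb, target_emb, temperature=0.05):
    """query_emb, target_emb: (batch, dim) L2-normalized embeddings of
    matched (instruction + query, target) pairs from the same batch."""
    # Similarity of every query against every target in the batch.
    logits = query_emb @ target_emb.T / temperature   # (batch, batch)
    labels = torch.arange(query_emb.size(0), device=query_emb.device)
    # Each query should score its own target highest; the other targets in
    # the batch serve as negatives.
    return F.cross_entropy(logits, labels)
```

Under this kind of objective, the model learns to place matching multimodal inputs close together in the embedding space while pushing apart mismatched pairs drawn from the same batch.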