E5-V: Universal Embeddings with Multimodal Large Language Models

Ting Jiang, Minghui Song, Zihan Zhang, Haizhen Huang, Weiwei Deng, Feng Sun, Qi Zhang, Deqing Wang, Fuzhen Zhuang

2024-07-18

Summary

This paper introduces E5-V, a new framework that uses multimodal large language models (MLLMs) to create universal embeddings, allowing different types of data, such as text and images, to be represented in a single shared space.

What's the problem?

While multimodal large language models (MLLMs) have made real progress in understanding both visual and textual information, how to use them to represent that information as embeddings has remained largely unexplored. Existing approaches typically require training on paired image and text data, which is costly and time-consuming to collect and run.

What's the solution?

E5-V addresses these challenges in two ways. First, it uses prompts to steer the MLLM so that text and images are mapped into the same embedding space, which bridges the modality gap and yields strong multimodal embeddings even without fine-tuning. Second, it introduces a single-modality training approach in which the model is trained only on text pairs rather than image-text pairs. This not only simplifies training but also cuts training costs by roughly 95% and removes the need to collect multimodal training data. Extensive experiments show that E5-V outperforms traditional multimodal training on image-text pairs across a variety of tasks.
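To make the idea concrete, here is a minimal sketch of prompt-based embedding extraction with an MLLM, in the spirit of E5-V. The model checkpoint, the exact prompt wording, and the use of the last token's hidden state as the embedding are illustrative assumptions, not a verbatim reproduction of the paper's setup.

```python
# Sketch: prompt an MLLM to compress an input into "one word" and use the
# last token's hidden state as its embedding, for both text and images.
import torch
from PIL import Image
from transformers import LlavaNextProcessor, LlavaNextForConditionalGeneration

model_id = "llava-hf/llava-v1.6-mistral-7b-hf"  # placeholder MLLM checkpoint (assumption)
processor = LlavaNextProcessor.from_pretrained(model_id)
model = LlavaNextForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

@torch.no_grad()
def embed_text(text: str) -> torch.Tensor:
    # The prompt asks the model to summarize the sentence in one word;
    # the hidden state of the final token serves as the text embedding.
    prompt = f"[INST] {text}\nSummarize the above sentence in one word: [/INST]"
    inputs = processor(text=prompt, return_tensors="pt").to(model.device)
    out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[-1][:, -1, :]  # shape (1, hidden_dim)

@torch.no_grad()
def embed_image(image: Image.Image) -> torch.Tensor:
    # An analogous prompt maps the image into the same "one word" space,
    # which is what bridges the modality gap between text and images.
    prompt = "[INST] <image>\nSummarize the above image in one word: [/INST]"
    inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)
    out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[-1][:, -1, :]
```

Because text and image embeddings come from the same prompt template and the same hidden state, they can be compared directly with cosine similarity for retrieval-style tasks.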

Why it matters?

This research is significant because it makes it easier and cheaper to develop models that can understand and process multiple types of data. By improving how multimodal information is represented, E5-V has the potential to enhance applications in areas like image retrieval, natural language processing, and more, making advanced AI technologies more accessible.

Abstract

Multimodal large language models (MLLMs) have shown promising advancements in general visual and language understanding. However, the representation of multimodal information using MLLMs remains largely unexplored. In this work, we introduce a new framework, E5-V, designed to adapt MLLMs for achieving universal multimodal embeddings. Our findings highlight the significant potential of MLLMs in representing multimodal inputs compared to previous approaches. By leveraging MLLMs with prompts, E5-V effectively bridges the modality gap between different types of inputs, demonstrating strong performance in multimodal embeddings even without fine-tuning. We propose a single modality training approach for E5-V, where the model is trained exclusively on text pairs. This method demonstrates significant improvements over traditional multimodal training on image-text pairs, while reducing training costs by approximately 95%. Additionally, this approach eliminates the need for costly multimodal training data collection. Extensive experiments across four types of tasks demonstrate the effectiveness of E5-V. As a universal multimodal model, E5-V not only achieves but often surpasses state-of-the-art performance in each task, despite being trained on a single modality.
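A natural way to realize the single-modality training described above is contrastive learning over sentence pairs with in-batch negatives. The sketch below shows such a loss; the pairing data, the temperature value, and the hypothetical embed_text_batch helper are assumptions for illustration rather than the paper's exact recipe.

```python
# Sketch: InfoNCE-style contrastive loss over text pairs only, so no
# image-text data is needed during training.
import torch
import torch.nn.functional as F

def info_nce_loss(anchor_emb: torch.Tensor,
                  positive_emb: torch.Tensor,
                  temperature: float = 0.05) -> torch.Tensor:
    """anchor_emb, positive_emb: (batch, dim) embeddings of paired sentences."""
    a = F.normalize(anchor_emb, dim=-1)
    p = F.normalize(positive_emb, dim=-1)
    logits = a @ p.t() / temperature              # (batch, batch) cosine similarities
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)        # each row's matched pair is the positive

# Usage (illustrative): embed_text_batch is a hypothetical batched version of the
# prompt-based text embedding above, applied to two related sentences per example.
# loss = info_nce_loss(embed_text_batch(sentences_a), embed_text_batch(sentences_b))
```

At inference time, the same trained model embeds images through the image prompt, so the text-only training still transfers to multimodal retrieval tasks.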