LLaVE: Large Language and Vision Embedding Models with Hardness-Weighted Contrastive Learning
Zhibin Lan, Liqiang Niu, Fandong Meng, Jie Zhou, Jinsong Su
2025-03-11
Summary
This paper introduces LLaVE, an AI model that matches images with text by focusing on tricky examples where wrong matches look very similar to the right ones, making the model better at telling them apart.
What's the problem?
Current AI models get confused when wrong image-text pairs look almost identical to correct matches, leading to mistakes in tasks like finding the right image for a search query.
What's the solution?
LLaVE uses a special training method that pays extra attention to these tricky ‘almost correct’ wrong answers, helping the AI learn to spot subtle differences and make better matches.
Why does it matter?
This improves tools like image search engines and AI assistants, making them faster and more accurate at understanding how pictures and words connect, even with less data and computing power.
Abstract
Universal multimodal embedding models play a critical role in tasks such as interleaved image-text retrieval, multimodal RAG, and multimodal clustering. However, our empirical results indicate that existing LMM-based embedding models trained with the standard InfoNCE loss exhibit a high degree of overlap in similarity distribution between positive and negative pairs, making it challenging to distinguish hard negative pairs effectively. To deal with this issue, we propose a simple yet effective framework that dynamically improves the embedding model's representation learning for negative pairs based on their discriminative difficulty. Within this framework, we train a series of models, named LLaVE, and evaluate them on the MMEB benchmark, which covers 4 meta-tasks and 36 datasets. Experimental results show that LLaVE establishes stronger baselines that achieve state-of-the-art (SOTA) performance while demonstrating strong scalability and efficiency. Specifically, LLaVE-2B surpasses the previous SOTA 7B models, while LLaVE-7B achieves a further performance improvement of 6.2 points. Although LLaVE is trained on image-text data, it can generalize to text-video retrieval tasks in a zero-shot manner and achieve strong performance, demonstrating its remarkable potential for transfer to other embedding tasks.
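To make the idea concrete, here is a minimal sketch of an InfoNCE-style contrastive loss in which negative pairs are reweighted by how hard they are to distinguish from the positive. This is an illustration of the general hardness-weighting principle the abstract describes, not the paper's exact formulation; the weighting scheme and the `beta` hardness coefficient are assumptions for this sketch.

```python
import numpy as np

def hardness_weighted_infonce(sim, temperature=0.05, beta=1.0):
    """Sketch: InfoNCE with negatives upweighted by difficulty.

    sim: (N, N) similarity matrix for a batch of N image-text pairs,
         where sim[i, i] is the matched (positive) pair.
    beta: hypothetical hardness coefficient (not from the paper);
          beta = 0 recovers the standard InfoNCE loss.
    """
    n = sim.shape[0]
    logits = sim / temperature
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    exp_logits = np.exp(logits)
    neg_mask = ~np.eye(n, dtype=bool)  # True at negative pairs

    # Hard negatives (similarity close to the positive's) get larger
    # weights; normalize so the average weight per negative stays 1.
    w = np.where(neg_mask, np.exp(beta * logits), 0.0)
    w = w / w.sum(axis=1, keepdims=True) * (n - 1)

    pos = exp_logits[np.arange(n), np.arange(n)]
    denom = pos + (w * np.where(neg_mask, exp_logits, 0.0)).sum(axis=1)
    return float(-np.log(pos / denom).mean())
```

A batch whose negatives score nearly as high as the positives (the overlapping-distribution case the abstract highlights) yields a larger loss here than an easy batch, so training pressure concentrates on exactly those hard pairs.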