Symmetrical Visual Contrastive Optimization: Aligning Vision-Language Models with Minimal Contrastive Images
Shengguang Wu, Fan-Yun Sun, Kaiyue Wen, Nick Haber
2025-02-21

Summary
This paper introduces S-VCO (Symmetrical Visual Contrastive Optimization), a new training method that helps AI models understand and describe images more accurately by focusing on important visual details.
What's the problem?
Current AI models that work with both images and text (called Vision-Language Models, or VLMs) often ignore important details in images and rely too heavily on their knowledge of language. This leads to mistakes when describing images, and sometimes to making up information that isn't actually there (known as hallucination).
What's the solution?
The researchers created S-VCO, a new way to train these AI models that makes them pay closer attention to specific details in images. They also built a dataset called MVC (Minimal Visual Contrasts), made of pairs of nearly identical images and their matching descriptions, to challenge the AI and teach it to notice small but important differences. This helps the AI learn to connect the right words with the right parts of an image.
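To make this concrete, here is a rough sketch of what a single training example built from one minimal-contrast pair could look like. The field names and file names below are hypothetical illustrations, not the actual MVC data format.

```python
# Illustrative only: a hypothetical record for one minimal-visual-contrast pair.
# Each pair couples two nearly identical images with the caption that fits only
# one of them, so the model must attend to the small visual difference.
mvc_example = {
    "image": "kitchen_with_two_apples.jpg",              # original image
    "text": "Two red apples sit on the counter.",        # caption grounded in it
    "image_cf": "kitchen_with_three_apples.jpg",         # counterfactual image (minimal edit)
    "text_cf": "Three red apples sit on the counter.",   # caption for the counterfactual
}
```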
Why does it matter?
This matters because it makes AI better at understanding and describing images accurately, which is important for many real-world applications. The new method reduced hallucinations (made-up details) by up to 22% and improved performance on tasks that rely heavily on visual information. This could lead to more reliable AI assistants, better image search engines, and improved accessibility tools for visually impaired people.
Abstract
Recent studies have shown that Large Vision-Language Models (VLMs) tend to neglect image content and over-rely on language-model priors, resulting in errors in visually grounded tasks and hallucinations. We hypothesize that this issue arises because existing VLMs are not explicitly trained to generate texts that are accurately grounded in fine-grained image details. To enhance visual feedback during VLM training, we propose S-VCO (Symmetrical Visual Contrastive Optimization), a novel finetuning objective that steers the model toward capturing important visual details and aligning them with corresponding text tokens. To further facilitate this detailed alignment, we introduce MVC, a paired image-text dataset built by automatically filtering and augmenting visual counterfactual data to challenge the model with hard contrastive cases involving Minimal Visual Contrasts. Experiments show that our method consistently improves VLM performance across diverse benchmarks covering various abilities and domains, achieving up to a 22% reduction in hallucinations, and significant gains in vision-centric and general tasks. Notably, these improvements become increasingly pronounced in benchmarks with higher visual dependency. In short, S-VCO offers a significant enhancement of VLM's visually dependent task performance while retaining or even improving the model's general abilities. We open-source our code at https://s-vco.github.io/
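The abstract does not spell out the exact form of the S-VCO objective, but a minimal sketch of a symmetric visual contrastive loss, under stated assumptions, could look like the PyTorch-style snippet below. It assumes a sigmoid preference term comparing the log-likelihood of each caption under its matching image versus the counterfactual image, applied in both directions; the function name, the beta scale, and the absence of a reference model are illustrative choices, and the paper's actual formulation may differ.

```python
import torch
import torch.nn.functional as F

def symmetric_visual_contrastive_loss(logp_t_given_i, logp_t_given_icf,
                                      logp_tcf_given_icf, logp_tcf_given_i,
                                      beta=1.0):
    """Hedged sketch of a symmetric visual contrastive objective.

    Inputs are per-example sequence log-likelihoods of a caption under the VLM,
    conditioned on either its matching image or the counterfactual image:
      logp_t_given_i     : log p(text | image)
      logp_t_given_icf   : log p(text | counterfactual image)
      logp_tcf_given_icf : log p(counterfactual text | counterfactual image)
      logp_tcf_given_i   : log p(counterfactual text | image)
    """
    # Direction 1: the original caption should fit the original image better
    # than the counterfactual image.
    loss_fwd = -F.logsigmoid(beta * (logp_t_given_i - logp_t_given_icf))
    # Direction 2: the counterfactual caption should fit the counterfactual
    # image better than the original image.
    loss_bwd = -F.logsigmoid(beta * (logp_tcf_given_icf - logp_tcf_given_i))
    # Average the two directions so both images in the minimal-contrast pair
    # supervise the model symmetrically.
    return (loss_fwd + loss_bwd).mean()

# Toy usage with random log-likelihoods for a batch of 4 contrastive pairs.
if __name__ == "__main__":
    b = 4
    loss = symmetric_visual_contrastive_loss(
        torch.randn(b), torch.randn(b), torch.randn(b), torch.randn(b))
    print(loss.item())
```

Intuitively, applying the preference in both directions means the model cannot satisfy the objective through language priors alone; it has to register the small visual difference between the two images to score both captions correctly.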