WildVision: Evaluating Vision-Language Models in the Wild with Human Preferences

Yujie Lu, Dongfu Jiang, Wenhu Chen, William Yang Wang, Yejin Choi, Bill Yuchen Lin

2024-06-18

Summary

This paper presents WildVision, a project that evaluates vision-language models (VLMs) against human preferences in real-world situations. It introduces an online platform called WildVision-Arena that collects user feedback to measure how well these models perform and to guide their improvement.

What's the problem?

As vision-language models become more advanced, there is a growing need to assess how well they understand and interact with both images and text in real-life scenarios. Existing evaluation methods often do not reflect actual human preferences or the complexities of multimodal interactions, which can lead to models that do not perform well in practical applications.

What's the solution?

To tackle this issue, the authors created WildVision-Arena, an interactive platform where users compare different VLMs in multi-round chats involving images. Users vote on which model gives the better response, and this feedback is used to build a benchmark called WV-Bench, consisting of 500 high-quality samples selected from over 8,000 user submissions. WV-Bench uses GPT-4 as a judge to compare each model's responses against Claude-3-Sonnet. The resulting rankings agree with the arena's human-vote Elo ratings much more closely than prior benchmarks such as MMVet, MMMU, and MMStar.
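To give a feel for how pairwise "which response is better?" votes become a leaderboard, here is a minimal Elo-style update loop in Python. The K-factor, starting rating, and example votes are illustrative assumptions, not WildVision-Arena's exact implementation (the paper reports Elo ratings, but the precise update scheme is not described here).

```python
# Minimal sketch: turning pairwise preference votes into Elo-style ratings,
# as arena-style leaderboards typically do.
# K, INIT, and the example votes are assumptions for illustration only.

K = 32          # update step size (assumed)
INIT = 1000.0   # starting rating for every model (assumed)

def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update_elo(ratings: dict, winner: str, loser: str, tie: bool = False) -> None:
    """Apply one pairwise vote to the ratings table in place."""
    ra, rb = ratings.setdefault(winner, INIT), ratings.setdefault(loser, INIT)
    ea = expected_score(ra, rb)
    score_a = 0.5 if tie else 1.0            # actual outcome for the "winner"
    ratings[winner] = ra + K * (score_a - ea)
    ratings[loser] = rb + K * ((1.0 - score_a) - (1.0 - ea))

# Hypothetical votes: (preferred model, other model, tie?)
votes = [
    ("gpt-4v", "reka-flash", False),
    ("claude-3-sonnet", "yi-vl-plus", False),
    ("gpt-4v", "claude-3-sonnet", True),
]

ratings: dict[str, float] = {}
for winner, loser, tie in votes:
    update_elo(ratings, winner, loser, tie)

for model, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{model:20s} {rating:7.1f}")
```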

Why it matters?

This research is important because it provides a more accurate way to evaluate how well AI models understand and respond to visual and textual information. By focusing on human preferences and real-world interactions, WildVision aims to improve the development of VLMs, making them more effective for practical use in areas like customer service, education, and content creation. This advancement could lead to AI systems that better meet human needs and expectations.

Abstract

Recent breakthroughs in vision-language models (VLMs) emphasize the necessity of benchmarking human preferences in real-world multimodal interactions. To address this gap, we launched WildVision-Arena (WV-Arena), an online platform that collects human preferences to evaluate VLMs. We curated WV-Bench by selecting 500 high-quality samples from 8,000 user submissions in WV-Arena. WV-Bench uses GPT-4 as the judge to compare each VLM with Claude-3-Sonnet, achieving a Spearman correlation of 0.94 with the WV-Arena Elo. This significantly outperforms other benchmarks like MMVet, MMMU, and MMStar. Our comprehensive analysis of 20K real-world interactions reveals important insights into the failure cases of top-performing VLMs. For example, we find that although GPT-4V surpasses many other models like Reka-Flash, Opus, and Yi-VL-Plus in simple visual recognition and reasoning tasks, it still faces challenges with subtle contextual cues, spatial reasoning, visual imagination, and expert domain knowledge. Additionally, current VLMs exhibit issues with hallucinations and safety when intentionally provoked. We are releasing our chat and feedback data to further advance research in the field of VLMs.
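As a rough illustration of how a static benchmark's agreement with arena Elo can be quantified (the 0.94 Spearman correlation mentioned above), here is a minimal sketch using scipy's `spearmanr`. The model names and scores below are hypothetical, not the paper's data.

```python
# Minimal sketch: Spearman rank correlation between a benchmark's scores
# and arena Elo ratings, the agreement measure reported in the abstract.
# All values below are made up for illustration.
from scipy.stats import spearmanr

models = ["model_a", "model_b", "model_c", "model_d"]
arena_elo = [1120.0, 1085.0, 1010.0, 960.0]     # hypothetical arena Elo ratings
benchmark_score = [78.2, 74.5, 69.9, 61.3]      # hypothetical benchmark scores

rho, p_value = spearmanr(arena_elo, benchmark_score)
print(f"Spearman correlation: {rho:.2f} (p={p_value:.3f})")
```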