VisionArena: 230K Real World User-VLM Conversations with Preference Labels
Christopher Chou, Lisa Dunlap, Koki Mashita, Krishna Mandal, Trevor Darrell, Ion Stoica, Joseph E. Gonzalez, Wei-Lin Chiang
2024-12-13
Summary
This paper introduces VisionArena, a dataset of 230,000 real conversations between users and vision-language models (VLMs), collected to help evaluate and improve how these AI systems interact with people.
What's the problem?
As vision-language models become more popular and advanced, there is a need for reliable data that reflects real user interactions. Existing datasets often lack the scale and diversity needed to effectively evaluate and improve these models, making it difficult to understand how well they perform in real-world scenarios.
What's the solution?
VisionArena addresses this problem with a large dataset of conversations collected from Chatbot Arena, an open-source platform where users interact with various VLMs. The dataset includes three subsets: VisionArena-Chat, with 200,000 single- and multi-turn conversations between a user and a VLM; VisionArena-Battle, with 30,000 conversations comparing two anonymous VLMs alongside user preference votes; and VisionArena-Bench, with 500 prompts for benchmarking. This structure allows researchers to analyze user preferences, question types, and model performance across different tasks.
Why it matters?
This research is important because it provides valuable insights into how users interact with VLMs and highlights areas where these models can improve. By understanding user preferences and the challenges faced by VLMs, developers can create better AI systems that respond more effectively to human needs. This dataset will help advance the field of AI by enabling more accurate evaluations and improvements in vision-language models.
Abstract
With the growing adoption and capabilities of vision-language models (VLMs) comes the need for benchmarks that capture authentic user-VLM interactions. In response, we create VisionArena, a dataset of 230K real-world conversations between users and VLMs. Collected from Chatbot Arena - an open-source platform where users interact with VLMs and submit preference votes - VisionArena spans 73K unique users, 45 VLMs, and 138 languages. Our dataset contains three subsets: VisionArena-Chat, 200K single and multi-turn conversations between a user and a VLM; VisionArena-Battle, 30K conversations comparing two anonymous VLMs with user preference votes; and VisionArena-Bench, an automatic benchmark of 500 diverse user prompts that efficiently approximate the live Chatbot Arena model rankings. Additionally, we highlight the types of questions asked by users, the influence of response style on preference, and areas where models often fail. We find open-ended tasks like captioning and humor are highly style-dependent, and current VLMs struggle with spatial reasoning and planning tasks. Lastly, we show finetuning the same base model on VisionArena-Chat outperforms LLaVA-Instruct-158K, with a 17-point gain on MMMU and a 46-point gain on the WildVision benchmark. Dataset at https://huggingface.co/lmarena-ai