MMMU-Pro: A More Robust Multi-discipline Multimodal Understanding Benchmark
Xiang Yue, Tianyu Zheng, Yuansheng Ni, Yubo Wang, Kai Zhang, Shengbang Tong, Yuxuan Sun, Ming Yin, Botao Yu, Ge Zhang, Huan Sun, Yu Su, Wenhu Chen, Graham Neubig
2024-09-05
Summary
This paper introduces MMMU-Pro, a new and more robust benchmark designed to test how well multimodal models understand and reason over different types of information, like text and images.
What's the problem?
Current benchmarks for evaluating multimodal models often include questions that can be answered using the text alone, which doesn't fully test a model's ability to integrate visual and textual information. Because of this, we don't really know how well these models can 'see' and 'read' at the same time, a key skill for understanding complex information.
What's the solution?
MMMU-Pro improves upon the original MMMU benchmark by filtering out questions that can be answered with text alone, expanding the set of candidate answer options, and introducing a setting where questions are embedded within images. This approach forces the AI to truly combine what it sees with what it reads. The results showed that models performed significantly worse on MMMU-Pro than on the original MMMU, indicating that it is a tougher test that better reflects real-world challenges.
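To make the first of these steps concrete, here is a minimal sketch of how the text-only filtering could work: a question is dropped if text-only models answer it correctly too often without ever seeing the image. The function names, model interface, and threshold are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch of the text-only filtering step (names, threshold, and
# the model interface are assumptions, not the paper's implementation).

def is_text_only_answerable(question, options, answer, text_models,
                            n_trials=5, threshold=0.8):
    """Return True if text-only models answer correctly without the image."""
    correct, total = 0, 0
    for ask in text_models:  # each `ask` maps (question, options) -> a predicted option
        for _ in range(n_trials):
            correct += int(ask(question, options) == answer)
            total += 1
    return correct / total >= threshold


def filter_benchmark(samples, text_models):
    """Keep only the questions that genuinely require looking at the image."""
    return [
        s for s in samples
        if not is_text_only_answerable(s["question"], s["options"], s["answer"], text_models)
    ]
```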
Why it matters?
This research is important because it provides a more realistic way to evaluate how well AI can understand and reason about complex information from multiple sources. By creating a benchmark that closely mimics real-life scenarios, MMMU-Pro can help guide future improvements in multimodal AI, making models more capable of handling diverse tasks.
Abstract
This paper introduces MMMU-Pro, a robust version of the Massive Multi-discipline Multimodal Understanding and Reasoning (MMMU) benchmark. MMMU-Pro rigorously assesses multimodal models' true understanding and reasoning capabilities through a three-step process based on MMMU: (1) filtering out questions answerable by text-only models, (2) augmenting candidate options, and (3) introducing a vision-only input setting where questions are embedded within images. This setting challenges AI to truly "see" and "read" simultaneously, testing a fundamental human cognitive skill of seamlessly integrating visual and textual information. Results show that model performance is substantially lower on MMMU-Pro than on MMMU, ranging from 16.8% to 26.9% across models. We explore the impact of OCR prompts and Chain of Thought (CoT) reasoning, finding that OCR prompts have minimal effect while CoT generally improves performance. MMMU-Pro provides a more rigorous evaluation tool, closely mimicking real-world scenarios and offering valuable directions for future research in multimodal AI.
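As a rough illustration of the vision-only input setting described in the abstract, the sketch below renders a question and its candidate options into a single image, so a model would have to read the text from pixels rather than receiving it as a string. The rendering details (PIL, layout, the example question) are only illustrative assumptions, not the benchmark's own pipeline.

```python
# Minimal sketch of producing a vision-only input: the question and options
# are drawn into one image, assuming Pillow for rendering (illustrative only).
from PIL import Image, ImageDraw

def render_question_image(question, options, width=800, line_height=28):
    """Render a question and its candidate options as a plain white image."""
    lines = [question] + [f"({chr(65 + i)}) {opt}" for i, opt in enumerate(options)]
    height = line_height * (len(lines) + 2)
    image = Image.new("RGB", (width, height), color="white")
    draw = ImageDraw.Draw(image)
    for i, line in enumerate(lines):
        draw.text((20, 20 + i * line_height), line, fill="black")
    return image

if __name__ == "__main__":
    # Hypothetical example question; the resulting PNG would be fed to a
    # multimodal model together with any original figure.
    img = render_question_image(
        "Which structure is highlighted in the figure?",
        ["Mitochondrion", "Golgi apparatus", "Nucleus", "Ribosome"],
    )
    img.save("question_vision_only.png")
```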