Multimodal Inconsistency Reasoning (MMIR): A New Benchmark for Multimodal Reasoning Models
Qianqi Yan, Yue Fan, Hongquan Li, Shan Jiang, Yang Zhao, Xinze Guan, Ching-Chen Kuo, Xin Eric Wang
2025-02-25
Summary
This paper introduces MMIR, a new benchmark designed to test how well AI models can spot and reason about mistakes or inconsistencies in content that combines text and images, such as webpages or presentation slides.
What's the problem?
Current AI models that work with both text and images are usually trained and tested on content where the text and images match perfectly. This doesn't reflect real-world situations, where there can be mistakes or mismatches between what's written and what's shown in the pictures. We don't know whether these AI models can handle such inconsistencies.
What's the solution?
The researchers created MMIR, a set of 534 challenging examples containing deliberately inserted errors. The errors fall into five categories that require complex reasoning to spot. They then tested six advanced AI models on MMIR to see how well each could identify the inconsistencies, and also tried different prompting strategies to see whether those would improve performance.
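To make the setup concrete, here is a minimal sketch of what an MMIR-style evaluation loop could look like. This is an illustrative assumption, not the authors' released code: the sample schema, the `error_element` field, and the `query_model` function are all hypothetical.

```python
# Hypothetical sketch of an MMIR-style evaluation loop.
# The sample schema and query_model() signature are assumptions for
# illustration; the real benchmark's data format may differ.

CATEGORIES = [
    "Factual Contradiction",
    "Identity Misattribution",
    "Contextual Mismatch",
    "Quantitative Discrepancy",
    "Temporal/Spatial Incoherence",
]

def score_detection(model_answer: str, injected_element_id: str) -> bool:
    """Count a hit if the model's answer names the element holding the injected error."""
    return injected_element_id in model_answer

def evaluate(samples, query_model):
    """Return per-category detection accuracy.

    Each sample is assumed to look like:
      {"image": ..., "text": ..., "category": ..., "error_element": "elem_12"}
    """
    hits = {c: 0 for c in CATEGORIES}
    totals = {c: 0 for c in CATEGORIES}
    for s in samples:
        answer = query_model(s["image"], s["text"])
        totals[s["category"]] += 1
        if score_detection(answer, s["error_element"]):
            hits[s["category"]] += 1
    # Report accuracy only for categories that actually appeared
    return {c: hits[c] / totals[c] for c in CATEGORIES if totals[c]}
```

Reporting accuracy per category, rather than one overall score, matches the paper's finding that models are much stronger on some inconsistency types (single-modality, text-only errors) than others (cross-modal conflicts).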
Why it matters?
This matters because as AI becomes more involved in tasks like checking websites or helping create presentations, it needs to be able to spot mistakes or inconsistencies between text and images. MMIR shows where current AI models struggle with this task, which can guide future research toward AI that better understands content combining text and images in complex ways. This could lead to more reliable and helpful AI assistants for a variety of tasks involving visual and textual information.
Abstract
Existing Multimodal Large Language Models (MLLMs) are predominantly trained and tested on consistent visual-textual inputs, leaving open the question of whether they can handle inconsistencies in real-world, layout-rich content. To bridge this gap, we propose the Multimodal Inconsistency Reasoning (MMIR) benchmark to assess MLLMs' ability to detect and reason about semantic mismatches in artifacts such as webpages, presentation slides, and posters. MMIR comprises 534 challenging samples, each containing synthetically injected errors across five reasoning-heavy categories: Factual Contradiction, Identity Misattribution, Contextual Mismatch, Quantitative Discrepancy, and Temporal/Spatial Incoherence. We evaluate six state-of-the-art MLLMs, showing that models with dedicated multimodal reasoning capabilities, such as o1, substantially outperform their counterparts while open-source models remain particularly vulnerable to inconsistency errors. Detailed error analyses further show that models excel in detecting inconsistencies confined to a single modality, particularly in text, but struggle with cross-modal conflicts and complex layouts. Probing experiments reveal that single-modality prompting, including Chain-of-Thought (CoT) and Set-of-Mark (SoM) methods, yields marginal gains, revealing a key bottleneck in cross-modal reasoning. Our findings highlight the need for advanced multimodal reasoning and point to future research on multimodal inconsistency.