MRMR: A Realistic and Expert-Level Multidisciplinary Benchmark for Reasoning-Intensive Multimodal Retrieval
Siyue Zhang, Yuan Gao, Xiao Zhou, Yilun Zhao, Tingyu Song, Arman Cohan, Anh Tuan Luu, Chen Zhao
2025-10-13
Summary
This paper introduces MRMR, a new benchmark designed to rigorously test how well AI systems can search through and understand information that mixes images and text, in tasks that demand expert-level reasoning.
What's the problem?
Existing benchmarks for searching images and text weren't challenging enough. They often focused on simple matching or only used one type of information at a time, like just images or just text. They also didn't cover a wide enough range of specialized knowledge areas or require complex thinking to find the right answers. Current systems struggle with tasks needing deeper understanding, like interpreting medical images or spotting contradictions in information.
What's the solution?
The researchers created MRMR, a benchmark with 1,502 complex queries spanning 23 different fields. These queries require systems to analyze images and text together, often interleaved in a sequence, and sometimes to identify conflicting information. As part of this, they introduced a new task called 'Contradiction Retrieval' that specifically tests a model's ability to find conflicting concepts. They evaluated several existing retrieval systems on MRMR and found that the best performer paired a strong text embedding model with image captions generated by a large language model, though even that combination left substantial room for improvement.
Why it matters?
MRMR provides a more realistic and difficult test for multimodal retrieval systems, that is, systems that search using both images and text. By pushing these systems to perform better on complex reasoning tasks, this work helps pave the way for more advanced AI that can understand and process information the way humans do, which is crucial for applications like medical diagnosis and scientific research.
Abstract
We introduce MRMR, the first expert-level multidisciplinary multimodal retrieval benchmark requiring intensive reasoning. MRMR contains 1,502 queries spanning 23 domains, with positive documents carefully verified by human experts. Compared to prior benchmarks, MRMR introduces three key advancements. First, it challenges retrieval systems across diverse areas of expertise, enabling fine-grained model comparison across domains. Second, queries are reasoning-intensive, with images requiring deeper interpretation such as diagnosing microscopic slides. We further introduce Contradiction Retrieval, a novel task requiring models to identify conflicting concepts. Finally, queries and documents are constructed as image-text interleaved sequences. Unlike earlier benchmarks restricted to single images or unimodal documents, MRMR offers a realistic setting with multi-image queries and mixed-modality corpus documents. We conduct an extensive evaluation of 4 categories of multimodal retrieval systems and 14 frontier models on MRMR. The text embedding model Qwen3-Embedding with LLM-generated image captions achieves the highest performance, highlighting substantial room for improving multimodal retrieval models. Although the latest multimodal models such as Ops-MM-Embedding perform competitively on expert-domain queries, they fall short on reasoning-intensive tasks. We believe that MRMR paves the way for advancing multimodal retrieval in more realistic and challenging scenarios.