UNIDOC-BENCH: A Unified Benchmark for Document-Centric Multimodal RAG
Xiangyu Peng, Can Qin, Zeyuan Chen, Ran Xu, Caiming Xiong, Chien-Sheng Wu
2025-10-10
Summary
This paper introduces a new way to test how well computer systems can answer questions using both text and images from real-world documents, such as PDFs. It focuses on a technique called Multimodal Retrieval-Augmented Generation (MM-RAG), which pairs large language models with information retrieved from external knowledge sources.
What's the problem?
Currently, there isn't a good, standardized way to evaluate how well these MM-RAG systems work on complex, real-world documents that contain a mix of text, tables, and figures. Existing benchmarks focus on text alone or images alone, or they use simplified multimodal setups that don't reflect how people actually use documents. This makes it hard to compare different systems and understand their strengths and weaknesses.
What's the solution?
The researchers created a large benchmark called UniDoc-Bench, built from 70,000 pages of real PDFs across eight subject areas. Their pipeline extracts and links evidence from the text, tables, and figures in these documents, then generates 1,600 question-answer pairs that require both textual and visual information, covering factual lookup, comparison, summarization, and logical reasoning. To keep the questions reliable, 20% of the pairs were checked by multiple annotators and adjudicated by an expert. Finally, they used the dataset to compare four MM-RAG approaches (text-only, image-only, text-image fusion, and joint multimodal retrieval) while keeping the candidate pools, prompts, and evaluation metrics the same, so the comparison is fair.
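To make that controlled comparison concrete, here is a minimal, self-contained sketch (not the authors' code): the question set and candidate pool stay fixed while only the retrieval strategy changes. Toy token-overlap scores stand in for the text and image embedding models a real system would use, and the candidate fields, function names, and example data are all illustrative assumptions.

```python
# Sketch of an apples-to-apples comparison: fixed questions and candidate pool,
# only the retrieval paradigm varies. Token overlap stands in for embeddings.
from dataclasses import dataclass

@dataclass
class Candidate:
    doc_id: str
    text: str           # extracted page/table text (empty for figure-only pages)
    image_caption: str  # stand-in for the visual content of a figure or page image

def _overlap(query: str, text: str) -> float:
    # Crude relevance score: fraction of query tokens that appear in the field.
    q, f = set(query.lower().split()), set(text.lower().split())
    return len(q & f) / (len(q) or 1)

def retrieve_text_only(query, pool, k):
    return sorted(pool, key=lambda c: _overlap(query, c.text), reverse=True)[:k]

def retrieve_image_only(query, pool, k):
    return sorted(pool, key=lambda c: _overlap(query, c.image_caption), reverse=True)[:k]

def retrieve_fusion(query, pool, k):
    # Late fusion: score the text and image evidence separately, then add the scores.
    # (A joint-embedding paradigm would score both modalities with one shared
    # multimodal encoder; that needs a real model, so it is omitted here.)
    def score(c):
        return _overlap(query, c.text) + _overlap(query, c.image_caption)
    return sorted(pool, key=score, reverse=True)[:k]

# Toy shared candidate pool and question set (illustrative values only).
pool = [
    Candidate("doc1-p3", "Quarterly revenue grew 12 percent year over year",
              "bar chart of revenue by quarter"),
    Candidate("doc2-p7", "", "line chart of customer churn rate by month"),
    Candidate("doc3-p1", "Safety procedures for equipment maintenance", ""),
]
questions = [
    {"question": "How did quarterly revenue change?", "gold_doc": "doc1-p3"},
    {"question": "What does the churn rate chart show by month?", "gold_doc": "doc2-p7"},
]

for name, retrieve in [("text-only", retrieve_text_only),
                       ("image-only", retrieve_image_only),
                       ("text+image fusion", retrieve_fusion)]:
    hits = sum(q["gold_doc"] in {c.doc_id for c in retrieve(q["question"], pool, k=2)}
               for q in questions)
    print(f"{name:18s} retrieval hit rate: {hits / len(questions):.2f}")
```

In the benchmark itself, the same idea applies at scale: the generator, prompts, and metrics are shared, so any performance gap can be attributed to the retrieval paradigm rather than to other parts of the pipeline.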
Why it matters?
This work is important because it provides a realistic, standardized benchmark for evaluating MM-RAG systems. The results show that retrieval which fuses text and images consistently beats text-only, image-only, and joint multimodal-embedding retrieval, meaning neither modality alone is enough and current multimodal embeddings still fall short. The analysis also shows when visual context adds to textual evidence, identifies systematic failure modes, and offers guidance for building systems that use both text and visual information more effectively.
Abstract
Multimodal retrieval-augmented generation (MM-RAG) is a key approach for applying large language models (LLMs) and agents to real-world knowledge bases, yet current evaluations are fragmented, focusing on either text or images in isolation or on simplified multimodal setups that fail to capture document-centric multimodal use cases. In this paper, we introduce UniDoc-Bench, the first large-scale, realistic benchmark for MM-RAG built from 70k real-world PDF pages across eight domains. Our pipeline extracts and links evidence from text, tables, and figures, then generates 1,600 multimodal QA pairs spanning factual retrieval, comparison, summarization, and logical reasoning queries. To ensure reliability, 20% of QA pairs are validated by multiple annotators and expert adjudication. UniDoc-Bench supports apples-to-apples comparison across four paradigms: (1) text-only, (2) image-only, (3) multimodal text-image fusion, and (4) multimodal joint retrieval -- under a unified protocol with standardized candidate pools, prompts, and evaluation metrics. Our experiments show that multimodal text-image fusion RAG systems consistently outperform both unimodal and jointly multimodal embedding-based retrieval, indicating that neither text nor images alone are sufficient and that current multimodal embeddings remain inadequate. Beyond benchmarking, our analysis reveals when and how visual context complements textual evidence, uncovers systematic failure modes, and offers actionable guidance for developing more robust MM-RAG pipelines.
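As an illustration only (not the released dataset schema), the sketch below shows one plausible way to represent a UniDoc-Bench style QA item: a question of one of the four query types named in the abstract, linked to the textual and visual evidence it depends on, plus a flag for the human-validated subset. All field names and example values are assumptions.

```python
# Illustrative schema for one multimodal QA item with linked evidence.
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class QueryType(Enum):
    FACTUAL_RETRIEVAL = "factual_retrieval"
    COMPARISON = "comparison"
    SUMMARIZATION = "summarization"
    LOGICAL_REASONING = "logical_reasoning"

@dataclass
class Evidence:
    page_id: str    # e.g. "report_a-page_12" (hypothetical identifier format)
    modality: str   # "text", "table", or "figure"
    snippet: str    # extracted text, or a short description of the visual element

@dataclass
class QAItem:
    question: str
    answer: str
    query_type: QueryType
    evidence: List[Evidence] = field(default_factory=list)
    human_validated: bool = False  # True for the ~20% subset checked by annotators

# One made-up example item linking a table and a figure as required evidence.
item = QAItem(
    question="Which product line had the larger margin increase, and by how much?",
    answer="Hardware, by 3 percentage points.",
    query_type=QueryType.COMPARISON,
    evidence=[
        Evidence("report_a-page_12", "table", "margin by product line, FY24"),
        Evidence("report_a-page_13", "figure", "bar chart comparing margin deltas"),
    ],
)
print(item.query_type.value, "| evidence modalities:",
      [e.modality for e in item.evidence])
```

Representing each question together with its linked text, table, and figure evidence is what lets the benchmark test all four retrieval paradigms on the same items and measure how much each modality contributes.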