MME-CoT: Benchmarking Chain-of-Thought in Large Multimodal Models for Reasoning Quality, Robustness, and Efficiency
Dongzhi Jiang, Renrui Zhang, Ziyu Guo, Yanwei Li, Yu Qi, Xinyan Chen, Liuhui Wang, Jianhan Jin, Claire Guo, Shen Yan, Bo Zhang, Chaoyou Fu, Peng Gao, Hongsheng Li
2025-02-14
Summary
This paper introduces MME-CoT, a benchmark designed to test how well large multimodal models (LMMs) can use Chain-of-Thought (CoT) reasoning to solve problems that involve both text and images.
What's the problem?
While CoT reasoning has improved the logical thinking of language models, it has not been studied as thoroughly in multimodal models that combine vision and text. These models often struggle with tasks requiring detailed reasoning, and they sometimes overthink problems, especially on perception-heavy tasks such as analyzing images.
What's the solution?
The researchers created MME-CoT, which evaluates LMMs across six areas: math, science, OCR (reading text in images), logic, space-time understanding, and general scene analysis. They introduced three new metrics to measure reasoning quality, robustness, and efficiency. Through experiments, they found that models with reflection mechanisms produce higher-quality CoT reasoning but are less efficient in both their normal responses and their self-correction phases.
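The summary does not spell out how the metrics are computed, but a fine-grained reasoning-quality score of this kind is often defined as the fraction of individual CoT steps that a judge marks as valid. A minimal sketch of that idea (the function name and scoring rule here are illustrative assumptions, not MME-CoT's actual metric definitions):

```python
def cot_quality(step_judgments):
    """Toy quality score: fraction of CoT steps judged correct.

    step_judgments: list of booleans, one per reasoning step,
    True if a judge marked that step as valid.
    NOTE: an illustrative stand-in, not MME-CoT's exact formula.
    """
    if not step_judgments:
        return 0.0
    return sum(step_judgments) / len(step_judgments)

# Example: 3 of 4 reasoning steps judged correct.
print(cot_quality([True, True, False, True]))  # 0.75
```

A per-step score like this rewards models whose intermediate reasoning is sound, not just those that land on the right final answer.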
Why it matters?
This matters because multimodal reasoning is essential for AI systems to handle complex real-world tasks that involve both text and images. MME-CoT helps identify strengths and weaknesses in current models, guiding improvements in their ability to think logically while processing diverse types of information.
Abstract
Answering questions with Chain-of-Thought (CoT) has significantly enhanced the reasoning capabilities of Large Language Models (LLMs), yet its impact on Large Multimodal Models (LMMs) still lacks a systematic assessment and in-depth investigation. In this paper, we introduce MME-CoT, a specialized benchmark evaluating the CoT reasoning performance of LMMs, spanning six domains: math, science, OCR, logic, space-time, and general scenes. As the first comprehensive study in this area, we propose a thorough evaluation suite incorporating three novel metrics that assess the reasoning quality, robustness, and efficiency at a fine-grained level. Leveraging curated high-quality data and a unique evaluation strategy, we conduct an in-depth analysis of state-of-the-art LMMs, uncovering several key insights: 1) Models with a reflection mechanism demonstrate superior CoT quality, with Kimi k1.5 outperforming GPT-4o and achieving the highest quality results; 2) CoT prompting often degrades LMM performance on perception-heavy tasks, suggesting a potentially harmful overthinking behavior; and 3) Although their CoT quality is high, LMMs with reflection exhibit significant inefficiency in both normal response and self-correction phases. We hope MME-CoT serves as a foundation for advancing multimodal reasoning in LMMs. Project Page: https://mmecot.github.io/