Why Reasoning Matters? A Survey of Advancements in Multimodal Reasoning (v1)
Jing Bi, Susan Liang, Xiaofei Zhou, Pinxin Liu, Junjia Guo, Yunlong Tang, Luchuan Song, Chao Huang, Guangyu Sun, Jinxi He, Jiarui Wu, Shu Yang, Daoan Zhang, Chen Chen, Lianggong Bruce Wen, Zhang Liu, Jiebo Luo, Chenliang Xu
2025-04-08
Summary
This paper talks about why teaching AI to think logically using both pictures and words is super important, like helping a robot understand a comic book by looking at images and reading speech bubbles at the same time.
What's the problem?
Current AI struggles to connect information from different sources (like photos and text) properly, sometimes getting confused when details don't match up, which leads to wrong answers.
What's the solution?
Researchers are improving AI's thinking skills by using better training methods that help it combine visual and text clues correctly, along with new ways to test if its reasoning makes sense.
Why does it matter?
Making AI smarter at combining different information types helps create better tools for things like medical diagnosis (reading scans and patient notes) or educational apps that explain diagrams and text together.
Abstract
Reasoning is central to human intelligence, enabling structured problem-solving across diverse tasks. Recent advances in large language models (LLMs) have greatly enhanced their reasoning abilities in arithmetic, commonsense, and symbolic domains. However, effectively extending these capabilities into multimodal contexts, where models must integrate both visual and textual inputs, continues to be a significant challenge. Multimodal reasoning introduces complexities, such as handling conflicting information across modalities, which require models to adopt advanced interpretative strategies. Addressing these challenges involves not only sophisticated algorithms but also robust methodologies for evaluating reasoning accuracy and coherence. This paper offers a concise yet insightful overview of reasoning techniques in both textual and multimodal LLMs. Through a thorough and up-to-date comparison, we clearly formulate core reasoning challenges and opportunities, highlighting practical methods for post-training optimization and test-time inference. Our work provides valuable insights and guidance, bridging theoretical frameworks and practical implementations, and sets clear directions for future research.
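To make the abstract's mention of test-time inference concrete, here is a minimal sketch of self-consistency decoding, one widely used test-time strategy for reasoning models (offered as a generic illustration, not as a method specific to this survey): sample several independent reasoning chains and take a majority vote over their final answers. The `samples` list below is a hypothetical stand-in for the final answers extracted from a model's sampled outputs.

```python
# Toy sketch of self-consistency at test time: sample multiple reasoning
# chains, then keep the final answer that most chains agree on.
from collections import Counter

def self_consistency(final_answers):
    """Return the majority-vote answer from a list of sampled final answers."""
    counts = Counter(final_answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# Hypothetical final answers from five sampled reasoning chains.
samples = ["42", "42", "17", "42", "17"]
print(self_consistency(samples))  # majority answer: "42"
```

The same voting scheme applies unchanged in multimodal settings: each sampled chain may attend to the image and text differently, and aggregation filters out chains that misread one modality.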