Actial: Activate Spatial Reasoning Ability of Multimodal Large Language Models

Xiaoyu Zhan, Wenxuan Huang, Hao Sun, Xinyu Fu, Changfeng Ma, Shaosheng Cao, Bohan Jia, Shaohui Lin, Zhenfei Yin, Lei Bai, Wanli Ouyang, Yuanqi Li, Jie Guo, Yanwen Guo

2025-11-04

Summary

This paper investigates whether advanced AI models that can understand both images and text, called Multimodal Large Language Models (MLLMs), can truly grasp 3D space and reason about objects from different viewpoints. It focuses on whether these models can consistently understand how an object looks from various angles, which is crucial for real-world applications.

What's the problem?

While MLLMs are getting better at understanding pictures, it's not clear if they can handle the complex spatial information needed for accurate 3D reasoning. Specifically, they might struggle with 'cross-view consistency' – meaning they might not recognize the same object as being the same when viewed from different positions. This is a big problem because understanding 3D space is essential for things like robots navigating the world or self-driving cars.

What's the solution?

The researchers created a new task called 'Viewpoint Learning' along with a dataset, Viewpoint-100K, containing 100,000 pairs of images showing objects from different angles, together with matching questions and answers. They then trained the MLLM in two stages: first, they gave it a solid foundation of spatial knowledge through supervised fine-tuning on the new dataset; second, they used Reinforcement Learning (with an algorithm called GRPO) to help it generalize that knowledge to new, unseen questions. They also developed a hybrid 'cold-start' initialization that lets the model learn viewpoint representations while keeping its reasoning coherent.
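To give a flavor of the second training stage: GRPO scores a group of sampled answers to the same question and normalizes each answer's reward against the group's mean and standard deviation, so the model is pushed toward answers that beat its own typical attempt. The snippet below is a minimal sketch of that group-relative advantage only, not the paper's actual implementation; the function name and the 0/1 correctness rewards are illustrative assumptions.

```python
from statistics import mean, pstdev

def group_relative_advantages(rewards):
    """GRPO-style advantage: normalize each sampled answer's reward
    against the mean and std of its own group (illustrative sketch)."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    if sigma == 0:
        # All answers scored the same: no learning signal from this group.
        return [0.0 for _ in rewards]
    return [(r - mu) / sigma for r in rewards]

# Example: four sampled answers to one viewpoint question,
# scored 1.0 if correct and 0.0 if wrong (hypothetical rewards).
advs = group_relative_advantages([1.0, 0.0, 0.0, 1.0])
# Correct answers get positive advantages, wrong ones negative.
```

Because the baseline is the group itself, no separate learned value model is needed, which is one reason GRPO is popular for fine-tuning large models.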

Why it matters?

This work shows that specifically training MLLMs to understand spatial relationships and viewpoints significantly improves their 3D reasoning abilities. This is a crucial step towards building AI systems that can interact with the physical world effectively, with potential applications in robotics, self-driving technology, and generally helping computers understand 3D environments like humans do.

Abstract

Recent advances in Multimodal Large Language Models (MLLMs) have significantly improved 2D visual understanding, prompting interest in their application to complex 3D reasoning tasks. However, it remains unclear whether these models can effectively capture the detailed spatial information required for robust real-world performance, especially cross-view consistency, a key requirement for accurate 3D reasoning. Considering this issue, we introduce Viewpoint Learning, a task designed to evaluate and improve the spatial reasoning capabilities of MLLMs. We present the Viewpoint-100K dataset, consisting of 100K object-centric image pairs with diverse viewpoints and corresponding question-answer pairs. Our approach employs a two-stage fine-tuning strategy: first, foundational knowledge is injected into the baseline MLLM via Supervised Fine-Tuning (SFT) on Viewpoint-100K, resulting in significant improvements across multiple tasks; second, generalization is enhanced through Reinforcement Learning using the Group Relative Policy Optimization (GRPO) algorithm on a broader set of questions. Additionally, we introduce a hybrid cold-start initialization method designed to simultaneously learn viewpoint representations and maintain coherent reasoning. Experimental results show that our approach significantly activates the spatial reasoning ability of MLLMs, improving performance on both in-domain and out-of-domain reasoning tasks. Our findings highlight the value of developing foundational spatial skills in MLLMs, supporting future progress in robotics, autonomous systems, and 3D scene understanding.