Think with 3D: Geometric Imagination Grounded Spatial Reasoning from Limited Views

Zhangquan Chen, Manyuan Zhang, Xinlei Yu, Xufang Luo, Mingze Sun, Zihao Pan, Yan Feng, Peng Pei, Xunliang Cai, Ruqi Huang

2025-10-22

Summary

This paper introduces a new system called 3DThinker that helps computers better understand the 3D relationships between objects in images, even when they only see limited views of those objects.

What's the problem?

Current vision-language systems struggle to understand 3D space from just a few images. They typically reason over either text descriptions (such as topological cognitive maps) or 2D visual cues, and neither representation is expressive enough for tasks that require imagining a scene in three dimensions. Without being trained on large amounts of explicitly labeled 3D data, these models find it hard to 'think' in 3D.

What's the solution?

The researchers created 3DThinker, a framework that lets the model build a 3D 'mental image' while it reasons about a scene. Training has two stages. First, supervised training aligns the 3D latent the model produces during reasoning with that of a pre-trained 3D foundation model (VGGT). Then, the entire reasoning trajectory is refined using only an outcome signal, i.e., whether the reasoning led to the correct answer. Importantly, the approach needs no explicitly labeled 3D training data to learn.
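The first stage, aligning the model's internal 3D representation with a teacher model's, can be sketched as a simple distillation loss. This is a minimal illustration, not the paper's implementation: the function name `alignment_loss` and the plain-list latents are assumptions, and a real setup would use batched tensors and backpropagate through the vision-language model only, keeping the 3D foundation model frozen.

```python
import math

def alignment_loss(vlm_latent, teacher_latent):
    """Stage 1 (hypothetical sketch): pull the 3D latent produced by the
    VLM during reasoning toward the latent of a frozen 3D foundation
    model (e.g., VGGT) by minimizing cosine distance."""
    dot = sum(a * b for a, b in zip(vlm_latent, teacher_latent))
    norm_v = math.sqrt(sum(a * a for a in vlm_latent))
    norm_t = math.sqrt(sum(b * b for b in teacher_latent))
    # 0.0 when the two latents point in the same direction,
    # up to 2.0 when they point in opposite directions.
    return 1.0 - dot / (norm_v * norm_t)
```

Minimizing this loss teaches the model to produce, from ordinary images, the kind of geometric representation a dedicated 3D model would extract, without ever showing it explicit 3D labels.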

Why it matters?

This work is important because it shows a new way for computers to understand 3D space without needing explicit 3D training data. It improves performance on tasks requiring 3D reasoning and brings us closer to creating AI systems that can understand the world more like humans do, by combining visual information with spatial imagination.

Abstract

Though recent advances in vision-language models (VLMs) have achieved remarkable progress across a wide range of multimodal tasks, understanding 3D spatial relationships from limited views remains a significant challenge. Previous reasoning methods typically rely on pure text (e.g., topological cognitive maps) or on 2D visual cues. However, their limited representational capacity hinders performance in specific tasks that require 3D spatial imagination. To address this limitation, we propose 3DThinker, a framework that can effectively exploit the rich geometric information embedded within images while reasoning, like humans do. Our framework is the first to enable 3D mentaling during reasoning without any 3D prior input, and it does not rely on explicitly labeled 3D data for training. Specifically, our training consists of two stages. First, we perform supervised training to align the 3D latent generated by the VLM while reasoning with that of a 3D foundation model (e.g., VGGT). Then, we optimize the entire reasoning trajectory solely based on outcome signals, thereby refining the underlying 3D mentaling. Extensive experiments across multiple benchmarks show that 3DThinker consistently outperforms strong baselines and offers a new perspective toward unifying 3D representations into multimodal reasoning. Our code will be available at https://github.com/zhangquanchen/3DThinker.
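The second stage described in the abstract, optimizing the whole reasoning trajectory from outcome signals alone, resembles a policy-gradient objective. The sketch below is an assumption about the general shape of such an objective (REINFORCE-style with a baseline), not the paper's actual algorithm; the name `outcome_loss` and the scalar baseline are illustrative.

```python
def outcome_loss(step_logps, answer_correct, baseline=0.5):
    """Stage 2 (hypothetical sketch): refine the full reasoning
    trajectory, including the 3D 'mental image' steps, using only
    whether the final answer was correct.

    step_logps: log-probabilities of each generated reasoning step.
    """
    reward = 1.0 if answer_correct else 0.0
    advantage = reward - baseline  # baseline reduces gradient variance
    # Minimizing this loss raises the probability of trajectories that
    # led to a correct answer and lowers it for incorrect ones.
    return -advantage * sum(step_logps)
```

Because only the final outcome is scored, the intermediate 3D representation is never directly supervised in this stage; it improves only insofar as better geometric "imagination" produces more correct answers.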