
G^2VLM: Geometry Grounded Vision Language Model with Unified 3D Reconstruction and Spatial Reasoning

Wenbo Hu, Jingli Lin, Yilin Long, Yunlong Ran, Lihan Jiang, Yifan Wang, Chenming Zhu, Runsen Xu, Tai Wang, Jiangmiao Pang

2025-11-27

Summary

This paper introduces a new type of vision-language model, called G^2VLM, that's better at understanding and reasoning about spatial relationships in images and videos.

What's the problem?

Current vision-language models struggle with tasks that require understanding where things are in 3D space. They're good at recognizing objects, but not so good at figuring out how those objects relate to each other in three dimensions, like judging depth or how things are positioned relative to one another. This is because they never actually learn the underlying 3D geometry of a scene from the 2D images they see.

What's the solution?

The researchers created G^2VLM, which learns 3D visual geometry directly from images and videos. It first reconstructs a 3D understanding of the scene, then uses those learned geometry features, through in-context learning and interleaved reasoning, to answer questions and solve problems about spatial relationships. Because it's trained on lots of multi-view image and video data, it gets the benefits of 3D visual priors without needing the kind of 3D annotations that are hard to collect.
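
To make the idea a bit more concrete, here is a minimal, hypothetical sketch of how a geometry-grounded VLM could be wired together: one projection for ordinary semantic visual tokens, a second projection for learned 3D geometry features, and a shared backbone that consumes both, plus a small head that predicts 3D points from the geometry tokens. The module names, dimensions, and the simple fuse-by-concatenation strategy are illustrative assumptions, not the paper's actual implementation.

```python
# Conceptual sketch only (not the authors' code): a shared backbone that
# ingests semantic visual tokens, 3D geometry tokens, and text tokens,
# supporting both 3D reconstruction and language-side spatial reasoning.
import torch
import torch.nn as nn


class GeometryGroundedVLM(nn.Module):
    def __init__(self, vis_dim=1024, geo_dim=768, lm_dim=512, num_layers=2):
        super().__init__()
        # Project semantic visual tokens (e.g. from a ViT) into the backbone space.
        self.vis_proj = nn.Linear(vis_dim, lm_dim)
        # Project learned 3D geometry features into the same space so they can
        # be interleaved with the other tokens.
        self.geo_proj = nn.Linear(geo_dim, lm_dim)
        # Stand-in for the language-model backbone.
        layer = nn.TransformerEncoderLayer(d_model=lm_dim, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Head that regresses 3D attributes (here, a 3D point per geometry token).
        self.recon_head = nn.Linear(lm_dim, 3)

    def forward(self, vis_tokens, geo_tokens, text_tokens):
        # Concatenate semantic, geometric, and text tokens into one sequence.
        fused = torch.cat(
            [self.vis_proj(vis_tokens), self.geo_proj(geo_tokens), text_tokens], dim=1
        )
        hidden = self.backbone(fused)
        n_vis, n_geo = vis_tokens.shape[1], geo_tokens.shape[1]
        # Geometry tokens feed the reconstruction head; the full hidden sequence
        # is what language-side spatial reasoning would build on.
        points_3d = self.recon_head(hidden[:, n_vis:n_vis + n_geo])
        return hidden, points_3d


# Toy usage with random tensors standing in for real encoder outputs.
model = GeometryGroundedVLM()
vis = torch.randn(1, 196, 1024)  # semantic visual tokens
geo = torch.randn(1, 196, 768)   # per-patch geometry features
txt = torch.randn(1, 32, 512)    # embedded text tokens
hidden, points = model(vis, geo, txt)
print(hidden.shape, points.shape)  # (1, 424, 512) (1, 196, 3)
```

The point of the sketch is the unified design: a single backbone serves both the 3D reconstruction output and the language-side reasoning, which is what lets abundant image and video data improve spatial understanding.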

Why it matters?

This work is important because it moves vision-language models closer to having true spatial intelligence. By giving these models a better grasp of 3D space, it opens up possibilities for more advanced applications, like being able to edit 3D scenes or create more realistic virtual environments. It also provides a strong starting point for other researchers to build upon.

Abstract

Vision-Language Models (VLMs) still lack robustness in spatial intelligence, demonstrating poor performance on spatial understanding and reasoning tasks. We attribute this gap to the absence of a visual geometry learning process capable of reconstructing 3D space from 2D images. We present G^2VLM, a geometry grounded vision-language model that bridges two fundamental aspects of spatial intelligence: spatial 3D reconstruction and spatial understanding. G^2VLM natively leverages learned 3D visual geometry features to directly predict 3D attributes and enhance spatial reasoning tasks via in-context learning and interleaved reasoning. Our unified design is highly scalable for spatial understanding: it trains on abundant multi-view image and video data, while simultaneously leveraging the benefits of 3D visual priors that are typically only derived from hard-to-collect annotations. Experimental results demonstrate G^2VLM is proficient in both tasks, achieving comparable results to state-of-the-art feed-forward 3D reconstruction models and achieving better or competitive results across spatial understanding and reasoning tasks. By unifying a semantically strong VLM with low-level 3D vision tasks, we hope G^2VLM can serve as a strong baseline for the community and unlock more future applications, such as 3D scene editing.