
Scaling Spatial Intelligence with Multimodal Foundation Models

Zhongang Cai, Ruisi Wang, Chenyang Gu, Fanyi Pu, Junxiang Xu, Yubo Wang, Wanqi Yin, Zhitao Yang, Chen Wei, Qingping Sun, Tongxi Zhou, Jiaqi Li, Hui En Pang, Oscar Qian, Yukun Wei, Zhiqian Lin, Xuanke Shi, Kewang Deng, Xiaoyang Han, Zukai Chen, Xiangyu Fan, Hanming Deng

2025-11-21


Summary

This paper focuses on improving how well artificial intelligence models understand and reason about spatial relationships: things like where objects sit relative to each other and how they fit together. The authors introduce a new family of models, called SenseNova-SI, designed to handle these kinds of tasks better.

What's the problem?

Current AI models, even those that are very good at understanding both images and text, often struggle with spatial intelligence. They might misjudge how objects are positioned relative to one another, or have trouble with puzzles that require reasoning about 3D space. This limits their usefulness in applications that demand real-world spatial understanding, such as robotics or interpreting complex diagrams.

What's the solution?

The researchers built SenseNova-SI on top of existing multimodal foundation models (including Qwen3-VL, InternVL3, and Bagel) and trained them on SenseNova-SI-8M, a dataset of eight million examples curated under a taxonomy of spatial capabilities and covering a wide variety of spatial challenges. They also studied how performance scales with the amount of training data, checked for pitfalls such as the model memorizing answers or relying on language shortcuts instead of truly understanding, and explored having the model 'think through' spatial problems step by step (spatial chain-of-thought reasoning).
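
Concretely, most of the spatial-intelligence benchmarks the paper reports on are multiple-choice visual question answering tasks. The snippet below is a minimal sketch of how such a benchmark might be scored; it is not the authors' pipeline, and the sample fields and the `ask_model` stub are illustrative assumptions.

```python
# Minimal sketch of scoring a multiple-choice spatial-reasoning benchmark
# (in the style of VSI-Bench or MindCube). Sample fields and the `ask_model`
# stub are illustrative assumptions, not the paper's actual code.
from dataclasses import dataclass

@dataclass
class SpatialSample:
    image_path: str       # scene image the question refers to
    question: str         # e.g. "Which object is closest to the camera?"
    options: list[str]    # candidate answers, indexed by letters A, B, C, ...
    answer: str           # ground-truth option letter, e.g. "B"

def ask_model(sample: SpatialSample) -> str:
    """Placeholder for a call into a multimodal model; should return an option letter."""
    return "A"  # replace with a real model invocation

def accuracy(samples: list[SpatialSample]) -> float:
    """Fraction of samples whose predicted letter matches the ground truth."""
    if not samples:
        return 0.0
    correct = sum(ask_model(s) == s.answer for s in samples)
    return correct / len(samples)
```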

Why it matters?

This work is important because it shows that we *can* significantly improve AI’s spatial intelligence by focusing on the right kind of data and training. Better spatial reasoning will allow AI to be used in more practical applications, like helping robots navigate the world, assisting in design and engineering, and even improving how we interact with virtual reality. The researchers are also making their models publicly available so other scientists can build on their work.

Abstract

Despite remarkable progress, multimodal foundation models still exhibit surprising deficiencies in spatial intelligence. In this work, we explore scaling up multimodal foundation models to cultivate spatial intelligence within the SenseNova-SI family, built upon established multimodal foundations including visual understanding models (i.e., Qwen3-VL and InternVL3) and unified understanding and generation models (i.e., Bagel). We take a principled approach to constructing high-performing and robust spatial intelligence by systematically curating SenseNova-SI-8M: eight million diverse data samples under a rigorous taxonomy of spatial capabilities. SenseNova-SI demonstrates unprecedented performance across a broad range of spatial intelligence benchmarks: 68.7% on VSI-Bench, 43.3% on MMSI, 85.6% on MindCube, 54.6% on ViewSpatial, and 50.1% on SITE, while maintaining strong general multimodal understanding (e.g., 84.9% on MMBench-En). More importantly, we analyze the impact of data scaling, discuss early signs of emergent generalization capabilities enabled by diverse data training, analyze the risk of overfitting and language shortcuts, present a preliminary study on spatial chain-of-thought reasoning, and validate the potential downstream application. SenseNova-SI is an ongoing project, and this report will be updated continuously. All newly trained multimodal foundation models are publicly released to facilitate further research in this direction.
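
Since the abstract states that the trained models are publicly released, a natural starting point is loading a checkpoint with Hugging Face transformers. The sketch below assumes a generic vision-to-text interface; the repository name is a placeholder, and the exact model class, preprocessing, and prompt format depend on which base model (Qwen3-VL, InternVL3, or Bagel) the checkpoint was built on.

```python
# Hedged sketch: querying a released spatial-intelligence checkpoint through a
# generic Hugging Face vision-to-text interface. The repo id is a placeholder;
# the real checkpoint may require a model-specific class or chat template.
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

repo_id = "ORG/SenseNova-SI-checkpoint"  # placeholder, not a confirmed repo name

processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForVision2Seq.from_pretrained(repo_id, trust_remote_code=True)

image = Image.open("scene.jpg")
prompt = "From this viewpoint, is the chair to the left or to the right of the table?"

inputs = processor(images=image, text=prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```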