Spatial Forcing: Implicit Spatial Representation Alignment for Vision-language-action Model
Fuhao Li, Wenxuan Song, Han Zhao, Jingbo Wang, Pengxiang Ding, Donglin Wang, Long Zeng, Haoang Li
2025-10-15
Summary
This paper introduces a new technique called Spatial Forcing to help robots better understand and act on instructions given in everyday language, specifically focusing on improving their ability to work in a 3D world.
What's the problem?
Current robots using vision and language often struggle with tasks requiring a good sense of space because they're typically trained using only 2D images. While adding 3D sensors like depth cameras helps, these sensors can be noisy or incomplete, and it's hard to get consistent data across different robots. Trying to *guess* 3D information from 2D images isn't very accurate either, limiting how well robots can perform actions in the real world.
What's the solution?
The researchers developed Spatial Forcing, which doesn't rely on 3D sensors or depth estimation. Instead, it subtly guides the robot's internal understanding of images to align with how a pretrained 3D model 'sees' the world. Essentially, the robot develops better spatial awareness by matching its intermediate image representations to those of a model that is already good at understanding 3D shapes and layouts, which improves the precision of its actions.
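The alignment described above can be sketched as an auxiliary training loss. The snippet below is a minimal, hypothetical illustration (not the paper's actual implementation): it projects the VLA's intermediate visual embeddings into the feature space of a frozen 3D foundation model and penalizes cosine dissimilarity. The function name, projection head, and tensor shapes are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def spatial_alignment_loss(vla_hidden, geo_features, proj):
    # Hypothetical sketch: project VLA intermediate embeddings into the
    # 3D-model feature space, then penalize cosine dissimilarity with
    # features from a frozen pretrained 3D foundation model.
    pred = F.normalize(proj(vla_hidden), dim=-1)      # (B, N, D3), unit norm
    target = F.normalize(geo_features, dim=-1)        # (B, N, D3), unit norm
    return (1.0 - (pred * target).sum(dim=-1)).mean()  # 0 when aligned

# Toy shapes: batch of 2 images, 16 visual tokens each (illustrative only).
proj = nn.Linear(64, 32)               # maps assumed VLA dim -> 3D feature dim
vla_hidden = torch.randn(2, 16, 64)    # intermediate VLA visual embeddings
geo_features = torch.randn(2, 16, 32)  # stand-in for frozen 3D-model features
loss = spatial_alignment_loss(vla_hidden, geo_features, proj)
```

In training, a loss like this would be added to the usual action-prediction objective with a weighting coefficient, so the model learns spatial structure as a side effect of imitation learning rather than from explicit 3D inputs.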
Why it matters?
This work is important because it allows robots to follow instructions more accurately in real-world environments without needing expensive or complicated 3D sensors. It also makes training these robots faster and more efficient, meaning they can learn new tasks with less data, opening the door to more capable and adaptable robots for various applications.
Abstract
Vision-language-action (VLA) models have recently shown strong potential in enabling robots to follow language instructions and execute precise actions. However, most VLAs are built upon vision-language models pretrained solely on 2D data, which lack accurate spatial awareness and hinder their ability to operate in the 3D physical world. Existing solutions attempt to incorporate explicit 3D sensor inputs such as depth maps or point clouds, but these approaches face challenges due to sensor noise, hardware heterogeneity, and incomplete depth coverage in existing datasets. Alternative methods that estimate 3D cues from 2D images also suffer from the limited performance of depth estimators. We propose Spatial Forcing (SF), a simple yet effective alignment strategy that implicitly forces VLA models to develop spatial comprehension capabilities without relying on explicit 3D inputs or depth estimators. SF aligns intermediate visual embeddings of VLAs with geometric representations produced by pretrained 3D foundation models. By enforcing alignment at intermediate layers, SF guides VLAs to encode richer spatial representations that enhance action precision. Extensive experiments in simulation and real-world environments demonstrate that SF achieves state-of-the-art results, surpassing both 2D- and 3D-based VLAs. SF further accelerates training by up to 3.8x and improves data efficiency across diverse robotic tasks. Project page is at https://spatial-forcing.github.io/