Spatial-TTT: Streaming Visual-based Spatial Intelligence with Test-Time Training

Fangfu Liu, Diankun Wu, Jiawei Chi, Yimo Cai, Yi-Hsin Hung, Xumin Yu, Hao Li, Han Hu, Yongming Rao, Yueqi Duan

2026-03-13

Summary

This paper focuses on how computers can understand spaces like humans do, by watching videos over time. It's about building 'spatial intelligence' in machines, meaning their ability to grasp and remember the layout and relationships within a scene.

What's the problem?

The main challenge isn't just processing long videos, but figuring out *what* information about the space is important to keep track of and *how* to organize it in the computer's 'memory'. Existing methods struggle to maintain a consistent understanding of a space as you watch a video for a long time, because they either forget earlier details or get bogged down trying to process everything.

What's the solution?

The researchers developed a system called Spatial-TTT. Instead of re-processing the whole video, it keeps learning while watching: a small subset of the model's parameters (the 'fast weights') is updated at test time, letting the system absorb and organize important spatial information without re-learning everything from scratch. They also added a predictive mechanism that pushes the model to anticipate how the scene looks across frames, which helps it capture 3D structure and how objects relate to each other. Finally, they built a training dataset with dense 3D spatial descriptions, which teaches the model to store global 3D information in its fast weights in a structured way.
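To make the fast-weight idea concrete, here is a minimal numpy sketch of generic test-time training, not the paper's actual architecture: a small matrix `W` (the "fast weights") is updated online with a gradient step on a self-supervised reconstruction loss for each incoming chunk of frame features, while everything else stays frozen. The function name, shapes, and loss are illustrative assumptions.

```python
import numpy as np

def ttt_fast_weight_update(chunks, dim=8, lr=0.1):
    """Illustrative test-time-training loop (not Spatial-TTT itself).

    chunks: iterable of (n_frames, dim) feature arrays streamed in order.
    W is a small 'fast weight' matrix adapted online; the gradient step
    minimizes a self-supervised reconstruction loss 0.5 * ||x @ W - x||^2
    per chunk, standing in for the paper's large-chunk updates.
    """
    W = np.zeros((dim, dim))          # fast weights, adapted at test time
    for chunk in chunks:              # one large chunk of frame features
        pred = chunk @ W              # project features through fast weights
        err = pred - chunk            # self-supervised reconstruction error
        grad = chunk.T @ err / len(chunk)  # gradient of the chunk loss wrt W
        W -= lr * grad                # single gradient step per chunk
    return W
```

After a few chunks, `W` has absorbed the stream's feature statistics, so reconstructing a recent chunk through `W` incurs less error than the untrained (zero) weights would.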

Why it matters?

This work is important because it improves a computer's ability to understand the world around it from video, which is crucial for things like robotics, self-driving cars, and even virtual reality. By efficiently processing and remembering spatial information, these systems can make better decisions and interact with the world more effectively.

Abstract

Humans perceive and understand real-world spaces through a stream of visual observations. Therefore, the ability to streamingly maintain and update spatial evidence from potentially unbounded video streams is essential for spatial intelligence. The core challenge is not simply longer context windows but how spatial information is selected, organized, and retained over time. In this paper, we propose Spatial-TTT towards streaming visual-based spatial intelligence with test-time training (TTT), which adapts a subset of parameters (fast weights) to capture and organize spatial evidence over long-horizon scene videos. Specifically, we design a hybrid architecture and adopt large-chunk updates parallel with sliding-window attention for efficient spatial video processing. To further promote spatial awareness, we introduce a spatial-predictive mechanism applied to TTT layers with 3D spatiotemporal convolution, which encourages the model to capture geometric correspondence and temporal continuity across frames. Beyond architecture design, we construct a dataset with dense 3D spatial descriptions, which guides the model to update its fast weights to memorize and organize global 3D spatial signals in a structured manner. Extensive experiments demonstrate that Spatial-TTT improves long-horizon spatial understanding and achieves state-of-the-art performance on video spatial benchmarks. Project page: https://liuff19.github.io/Spatial-TTT.
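The abstract pairs large-chunk fast-weight updates with sliding-window attention for efficient streaming. As a rough illustration of the sliding-window part only, here is a numpy sketch in which each frame token attends just to itself and a few preceding tokens, keeping per-step cost constant on unbounded streams. The function name, window size, and single-head formulation are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sliding_window_attention(x, window=4):
    """Illustrative causal sliding-window attention (single head).

    x: (n_tokens, dim) frame features. Token i attends only to tokens
    j with i - window < j <= i, so memory/compute per step stays
    bounded no matter how long the video stream grows.
    """
    n, d = x.shape
    scores = x @ x.T / np.sqrt(d)           # pairwise attention logits
    idx = np.arange(n)
    in_window = (idx[None, :] <= idx[:, None]) & \
                (idx[:, None] - idx[None, :] < window)
    scores = np.where(in_window, scores, -np.inf)  # mask outside the window
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax
    return weights @ x
```

With `window=1` each token attends only to itself, so the output reproduces the input; larger windows trade a little extra compute for more local temporal context, while the fast weights carry the long-horizon spatial memory.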