
Cambrian-S: Towards Spatial Supersensing in Video

Shusheng Yang, Jihan Yang, Pinzhi Huang, Ellis Brown, Zihao Yang, Yue Yu, Shengbang Tong, Zihan Zheng, Yifan Xu, Muhan Wang, Daohan Lu, Rob Fergus, Yann LeCun, Li Fei-Fei, Saining Xie

2025-11-07


Summary

This paper argues that building truly intelligent systems that understand the world the way humans do requires moving beyond reactive, task-driven models that simply process ever-longer streams of raw video. Instead, it proposes a broader paradigm called 'spatial supersensing,' in which a system actively perceives, remembers, and anticipates its surroundings over time.

What's the problem?

Current AI models, especially those dealing with video and images, are good at identifying *what* they see, but struggle with understanding *where* things are in space, how events unfold over time, and what might happen next. Existing benchmarks rarely push models to build a complete internal 'world model'; they mostly test basic recognition. Simply making models bigger and feeding them more data isn't enough to solve this problem; they need a more deliberate way to select, organize, and reason about spatial information.

What's the solution?

The researchers created a new benchmark, VSI-SUPER, with two parts: one tests whether a system can recall spatial details from arbitrarily long videos, and the other tests whether it can keep counting objects as a video streams on. They also curated a large training dataset, VSI-590K, and used it to build a model called Cambrian-S, which substantially improved spatial reasoning scores (about 30 points on VSI-Bench) without hurting general abilities. But scaling data and model size alone still wasn't enough to solve VSI-SUPER. So they experimented with 'predictive sensing': a system that tries to *predict* the next moment of a video and uses its prediction errors, or 'surprise,' to decide what to remember and where one event ends and another begins (a rough sketch of this idea follows below).
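To make the predictive-sensing idea concrete, here is a minimal, hypothetical sketch of surprise-driven memory and event segmentation. It is not the paper's implementation: the predictor architecture, the latent features, the hand-set surprise threshold, and the mean-pooled memory are all illustrative assumptions standing in for the paper's self-supervised next-latent-frame predictor.

```python
# Illustrative sketch only (not the paper's code): a toy next-latent-frame
# predictor whose prediction error ("surprise") is used to cut a stream of
# video latents into events and to consolidate each event into a compact memory.
import torch
import torch.nn as nn


class LatentPredictor(nn.Module):
    """Tiny stand-in for a self-supervised next-latent-frame predictor."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))

    def forward(self, z_t: torch.Tensor) -> torch.Tensor:
        # Predict the latent of frame t+1 from the latent of frame t.
        return self.net(z_t)


def segment_by_surprise(latents: torch.Tensor, predictor: nn.Module, threshold: float):
    """Return event boundaries and one mean-pooled memory vector per event.

    A new event starts whenever the prediction error for the incoming frame
    exceeds `threshold` (an assumed, hand-set cutoff).
    """
    boundaries = [0]
    memory, current_event = [], [latents[0]]
    for t in range(1, latents.shape[0]):
        predicted = predictor(latents[t - 1])
        surprise = torch.linalg.vector_norm(predicted - latents[t]).item()
        if surprise > threshold:
            # Unexpected frame: close the current event and consolidate it.
            memory.append(torch.stack(current_event).mean(dim=0))
            boundaries.append(t)
            current_event = []
        current_event.append(latents[t])
    memory.append(torch.stack(current_event).mean(dim=0))
    return boundaries, memory


if __name__ == "__main__":
    torch.manual_seed(0)
    predictor = LatentPredictor(dim=256).eval()
    stream = torch.randn(120, 256)  # stand-in for per-frame video latents
    with torch.no_grad():
        bounds, mem = segment_by_surprise(stream, predictor, threshold=20.0)
    print(f"{len(bounds)} events; compressed memory holds {len(mem)} vectors")
```

In this toy version an untrained predictor and random latents merely exercise the control flow; a real system would learn the predictor from video in a self-supervised way and use the surprise signal to manage a long-horizon memory rather than keep every frame.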

Why it matters?

This work is important because it points the way towards AI that can truly understand and interact with the physical world. Instead of just recognizing objects, these systems would be able to anticipate events, build internal maps, and reason about space and time, much like humans do. This is a crucial step towards creating more robust and adaptable AI for applications like robotics, self-driving cars, and virtual reality.

Abstract

We argue that progress in true multimodal intelligence calls for a shift from reactive, task-driven systems and brute-force long context towards a broader paradigm of supersensing. We frame spatial supersensing as four stages beyond linguistic-only understanding: semantic perception (naming what is seen), streaming event cognition (maintaining memory across continuous experiences), implicit 3D spatial cognition (inferring the world behind pixels), and predictive world modeling (creating internal models that filter and organize information). Current benchmarks largely test only the early stages, offering narrow coverage of spatial cognition and rarely challenging models in ways that require true world modeling. To drive progress in spatial supersensing, we present VSI-SUPER, a two-part benchmark: VSR (long-horizon visual spatial recall) and VSC (continual visual spatial counting). These tasks require arbitrarily long video inputs yet are resistant to brute-force context expansion. We then test data scaling limits by curating VSI-590K and training Cambrian-S, achieving +30% absolute improvement on VSI-Bench without sacrificing general capabilities. Yet performance on VSI-SUPER remains limited, indicating that scale alone is insufficient for spatial supersensing. We propose predictive sensing as a path forward, presenting a proof-of-concept in which a self-supervised next-latent-frame predictor leverages surprise (prediction error) to drive memory and event segmentation. On VSI-SUPER, this approach substantially outperforms leading proprietary baselines, showing that spatial supersensing requires models that not only see but also anticipate, select, and organize experience.