
CRISP: Contact-Guided Real2Sim from Monocular Video with Planar Scene Primitives

Zihan Wang, Jiashun Wang, Jeff Tan, Yiwen Zhao, Jessica Hodgins, Shubham Tulsiani, Deva Ramanan

2025-12-17

Summary

This paper introduces CRISP, a new technique for creating realistic, usable 3D environments and human movements directly from ordinary videos. It aims to build virtual worlds that robots or augmented/virtual reality applications can actually use: worlds that are not just visually impressive but physically accurate.

What's the problem?

Existing methods for building 3D scenes and human motion from video often fall short. Some rely heavily on pre-existing data and don't consider the laws of physics, leading to unrealistic results. Others create messy, inaccurate 3D models with lots of errors, making it impossible for virtual characters to interact with the environment properly – imagine a robot trying to sit in a chair that isn't modeled correctly! Essentially, it's hard to get both accurate geometry *and* realistic movement at the same time.

What's the solution?

CRISP tackles this by first reconstructing a rough 3D scene from the video. It then simplifies this scene by fitting basic shapes, such as planes, to the 3D point cloud, producing a clean and accurate representation. It cleverly uses information about how humans interact with objects, like the fact that a sitting posture implies a seat hidden from view, to fill in parts of the scene the camera never sees. Finally, it verifies that the reconstructed scene and motion are physically plausible by using them to drive a virtual human trained with reinforcement learning, refining the result until the movements look and feel natural.
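To make the "fitting planes to a point cloud" step concrete, here is a minimal sketch of plane fitting via RANSAC. This is an illustration only: the paper describes a clustering pipeline over depth, normals, and flow, not this exact algorithm, and the function name and parameters below are hypothetical.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_thresh=0.02, seed=0):
    """Fit one plane (n, d), with n . p + d = 0, to a 3D point cloud via RANSAC.

    points: (N, 3) array. Returns the best (normal, offset) and an inlier mask.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        # Sample 3 points and compute the plane through them.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (near-collinear) sample, skip it
        n = n / norm
        d = -n @ p0
        # Count points within the distance threshold of the plane.
        dist = np.abs(points @ n + d)
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```

Running this repeatedly, removing inliers after each fit, would decompose a scene into a small set of planar primitives, the kind of clean, convex geometry the paper argues is simulation-ready.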

Why it matters?

This work is important because it significantly improves the ability to create realistic simulations from real-world videos. This has huge implications for training robots to perform tasks in the real world without needing expensive and time-consuming manual setup. It also opens doors for more immersive and interactive augmented and virtual reality experiences, and even allows for working with AI-generated videos to create usable 3D environments.

Abstract

We introduce CRISP, a method that recovers simulatable human motion and scene geometry from monocular video. Prior work on joint human-scene reconstruction relies on data-driven priors and joint optimization with no physics in the loop, or recovers noisy geometry with artifacts that cause motion tracking policies with scene interactions to fail. In contrast, our key insight is to recover convex, clean, and simulation-ready geometry by fitting planar primitives to a point cloud reconstruction of the scene, via a simple clustering pipeline over depth, normals, and flow. To reconstruct scene geometry that might be occluded during interactions, we make use of human-scene contact modeling (e.g., we use human posture to reconstruct the occluded seat of a chair). Finally, we ensure that human and scene reconstructions are physically-plausible by using them to drive a humanoid controller via reinforcement learning. Our approach reduces motion tracking failure rates from 55.2% to 6.9% on human-centric video benchmarks (EMDB, PROX), while delivering a 43% faster RL simulation throughput. We further validate it on in-the-wild videos including casually-captured videos, Internet videos, and even Sora-generated videos. This demonstrates CRISP's ability to generate physically-valid human motion and interaction environments at scale, greatly advancing real-to-sim applications for robotics and AR/VR.