
Unified 4D World Action Modeling from Video Priors with Asynchronous Denoising

Jun Guo, Qiwei Li, Peiyan Li, Zilong Chen, Nan Sun, Yifei Su, Heyun Wang, Yuan Zhang, Xinghang Li, Huaping Liu

2026-04-30


Summary

This paper introduces X-WAM, a single system that lets robots understand the 3D world around them as it changes over time (a "4D" world model) and plan actions in that world at the same time.

What's the problem?

Previous systems that combine world understanding and action planning often modeled only 2D images, which isn't enough for a robot to navigate and interact with a 3D environment. They also struggled to balance running fast enough for real-time control against building a detailed, accurate model of the world.

What's the solution?

The researchers developed X-WAM, which predicts what the world will look like in the future as multi-view RGB-D video (color plus depth). To build a 3D understanding efficiently, it reuses a pretrained video generation model: the final few blocks of that model are copied into a separate branch that predicts depth. A key technique called Asynchronous Noise Sampling lets the system decode actions quickly while still generating high-quality video and 3D reconstructions: actions are decoded with only a few denoising steps for fast planning, while the video is refined over the full schedule for detailed world modeling (see the sketch below).
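To make the asynchronous denoising idea concrete, here is a minimal sketch of an inference loop that decodes actions after only a few denoising steps while the video latents keep refining over the full schedule. The `denoiser` callable, the step counts, and the simple Euler-style update are illustrative assumptions, not X-WAM's actual implementation.

```python
import torch

# Minimal sketch of an asynchronous denoising schedule (illustrative only).
# `denoiser` is a hypothetical model that jointly predicts denoised action and
# video latents; the step counts and linear timesteps are assumptions.
@torch.no_grad()
def asynchronous_denoise(denoiser, action_noise, video_noise,
                         action_steps=4, video_steps=32):
    """Decode actions in a few steps; keep denoising video for the full schedule."""
    action_latent, video_latent = action_noise, video_noise
    actions = None

    for step in range(video_steps):
        # Video follows the full schedule; actions follow a shorter one,
        # so their timesteps differ ("asynchronous") within the same pass.
        t_video = 1.0 - step / video_steps
        t_action = max(1.0 - step / action_steps, 0.0)

        pred_action, pred_video = denoiser(action_latent, video_latent,
                                           t_action, t_video)

        # Toy update that moves each latent toward its current prediction.
        video_latent = video_latent + (pred_video - video_latent) / (video_steps - step)
        if step < action_steps:
            action_latent = action_latent + (pred_action - action_latent) / (action_steps - step)
            if step == action_steps - 1:
                actions = action_latent  # actions ready early for real-time control

    return actions, video_latent
```

The point of the schedule is that the robot gets executable actions long before the full-fidelity video rollout is finished, so action latency and world-model quality no longer trade off directly against each other.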

Why it matters?

This work is important because it represents a significant step towards robots that can truly understand and interact with their surroundings in a realistic way. By combining accurate 3D world modeling with efficient action planning, X-WAM achieves better performance on standard robotic tasks and produces more realistic simulations of the robot's environment, paving the way for more capable and adaptable robots.

Abstract

We propose X-WAM, a Unified 4D World Model that unifies real-time robotic action execution and high-fidelity 4D world synthesis (video + 3D reconstruction) in a single framework, addressing the critical limitations of prior unified world models (e.g., UWM) that only model 2D pixel-space and fail to balance action efficiency and world modeling quality. To leverage the strong visual priors of pretrained video diffusion models, X-WAM imagines the future world by predicting multi-view RGB-D videos, and obtains spatial information efficiently through a lightweight structural adaptation: replicating the final few blocks of the pretrained Diffusion Transformer into a dedicated depth prediction branch for the reconstruction of future spatial information. Moreover, we propose Asynchronous Noise Sampling (ANS) to jointly optimize generation quality and action decoding efficiency. ANS applies a specialized asynchronous denoising schedule during inference, which rapidly decodes actions with fewer steps to enable efficient real-time execution, while dedicating the full sequence of steps to generate high-fidelity video. Rather than entirely decoupling the timesteps during training, ANS samples from their joint distribution to align with the inference distribution. Pretrained on over 5,800 hours of robotic data, X-WAM achieves 79.2% and 90.7% average success rate on RoboCasa and RoboTwin 2.0 benchmarks, while producing high-fidelity 4D reconstruction and generation surpassing existing methods in both visual and geometric metrics.
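The "lightweight structural adaptation" described in the abstract can be pictured as duplicating the last few transformer blocks of the pretrained video Diffusion Transformer into a parallel depth branch. The sketch below illustrates that idea under assumed interfaces; `blocks`, `hidden_size`, `final_layer`, and the `block(x, t)` call signature are hypothetical stand-ins for the real model, not the paper's code.

```python
import copy
import torch.nn as nn

# Illustrative sketch of replicating the final DiT blocks into a depth branch.
# The pretrained model's attributes and block signature are assumptions.
class RGBDWorldModel(nn.Module):
    def __init__(self, pretrained_dit: nn.Module, num_depth_blocks: int = 4):
        super().__init__()
        self.backbone = pretrained_dit  # pretrained video Diffusion Transformer

        # Copy the final few transformer blocks to initialize a depth branch,
        # so it inherits the video model's visual priors.
        self.depth_blocks = nn.ModuleList(
            copy.deepcopy(block) for block in pretrained_dit.blocks[-num_depth_blocks:]
        )
        hidden = pretrained_dit.hidden_size
        self.depth_head = nn.Linear(hidden, 1)  # per-token depth prediction
        self.num_depth_blocks = num_depth_blocks

    def forward(self, tokens, timestep):
        feats = tokens
        # Shared trunk: all blocks except the last few run once for both outputs.
        for block in self.backbone.blocks[:-self.num_depth_blocks]:
            feats = block(feats, timestep)

        rgb_feats, depth_feats = feats, feats
        # The final blocks run twice: original weights for RGB, copied weights for depth.
        for rgb_block, depth_block in zip(
            self.backbone.blocks[-self.num_depth_blocks:], self.depth_blocks
        ):
            rgb_feats = rgb_block(rgb_feats, timestep)
            depth_feats = depth_block(depth_feats, timestep)

        rgb = self.backbone.final_layer(rgb_feats, timestep)
        depth = self.depth_head(depth_feats)
        return rgb, depth
```

Initializing the depth branch from the trained video blocks lets it start from the same visual priors as the RGB path, so only a small fraction of the parameters needs to adapt to the geometric prediction task.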