World Models That Know When They Don't Know: Controllable Video Generation with Calibrated Uncertainty

Zhiting Mei, Tenny Yin, Micah Baker, Ola Shorinwa, Anirudha Majumdar

2025-12-08

Summary

This paper focuses on making video-generating AI more reliable by teaching it to recognize when it's unsure about what it's creating, and to show us where those uncertainties are.

What's the problem?

Current AI models that create videos, especially those controlled by text or actions, are really good at making things *look* realistic, but they often 'hallucinate' – meaning they generate future frames that don't actually make sense in the real world. This is a big problem for things like training robots, because a robot acting on a hallucinated future could fail. The issue is that these models have no way of telling us *how confident* they are in their predictions, so we can't easily detect or fix these errors.

What's the solution?

The researchers developed a method called C3 that teaches video AI to estimate its own uncertainty. It does this in three main ways. First, it trains the AI to be both accurate *and* honest about its confidence using a special scoring system (a strictly proper scoring rule, which rewards honest confidence estimates). Second, it estimates uncertainty within the AI's internal 'thinking' process (its latent space) rather than analyzing the final video pixels directly, which is more stable and far cheaper to train. Finally, it translates this internal uncertainty into a visual 'heatmap' over the video, highlighting the regions where the AI is less sure of itself.
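To make the idea concrete, here is a minimal sketch of the two core ingredients described above: a strictly proper scoring rule (the Gaussian negative log-likelihood is a standard example; the paper's exact rule may differ) and the upsampling of per-latent-cell uncertainty into a pixel-level heatmap. All function names and the patch size are illustrative assumptions, not from the paper.

```python
import numpy as np

def gaussian_nll(mean, log_var, target):
    # Gaussian negative log-likelihood: a strictly proper scoring rule.
    # It is minimized in expectation only when the predicted mean AND
    # variance match the true distribution, so the model is rewarded
    # for honest (calibrated) confidence, not just for accuracy.
    return 0.5 * (log_var + (target - mean) ** 2 / np.exp(log_var)).mean()

def latent_uncertainty_to_heatmap(log_var, patch_size=8):
    # Map per-latent-cell variance to a pixel-level heatmap by tiling
    # each cell's uncertainty over its corresponding image patch
    # (nearest-neighbor upsampling), then normalize to [0, 1].
    var = np.exp(log_var)
    heat = np.kron(var, np.ones((patch_size, patch_size)))
    return (heat - heat.min()) / (heat.max() - heat.min() + 1e-8)
```

A perfectly confident, perfectly correct prediction (mean equal to the target, unit variance) scores zero under this rule, while over- or under-confident predictions are penalized.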

Why it matters?

This work is important because it allows us to build more trustworthy AI systems for video generation. By knowing when the AI is uncertain, we can either correct its mistakes, avoid using its predictions in critical situations (like robot control), or ask it to generate something different. This is a step towards making AI-generated videos more reliable and useful in real-world applications.
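One simple way such uncertainty estimates could be used in practice, as the paragraph above suggests, is a gating rule that rejects a generated frame when too much of it is uncertain. This sketch is a hypothetical illustration; the threshold values and function name are assumptions, not part of the paper.

```python
def should_trust_prediction(heatmap, threshold=0.5, max_uncertain_frac=0.2):
    # Hypothetical gating rule: treat a generated frame as untrustworthy
    # when more than max_uncertain_frac of its pixels exceed the
    # uncertainty threshold. Both parameters are illustrative and would
    # need to be calibrated for a real system (e.g. a robot planner).
    uncertain_frac = (heatmap > threshold).mean()
    return bool(uncertain_frac <= max_uncertain_frac)
```

A downstream system (say, a robot policy evaluator) could then fall back to a safe behavior, or regenerate the prediction, whenever this check fails.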

Abstract

Recent advances in generative video models have led to significant breakthroughs in high-fidelity video synthesis, specifically in controllable video generation where the generated video is conditioned on text and action inputs, e.g., in instruction-guided video editing and world modeling in robotics. Despite these exceptional capabilities, controllable video models often hallucinate - generating future video frames that are misaligned with physical reality - which raises serious concerns in many tasks such as robot policy evaluation and planning. However, state-of-the-art video models lack the ability to assess and express their confidence, impeding hallucination mitigation. To rigorously address this challenge, we propose C3, an uncertainty quantification (UQ) method for training continuous-scale calibrated controllable video models for dense confidence estimation at the subpatch level, precisely localizing the uncertainty in each generated video frame. Our UQ method introduces three core innovations to empower video models to estimate their uncertainty. First, our method develops a novel framework that trains video models for correctness and calibration via strictly proper scoring rules. Second, we estimate the video model's uncertainty in latent space, avoiding training instability and prohibitive training costs associated with pixel-space approaches. Third, we map the dense latent-space uncertainty to interpretable pixel-level uncertainty in the RGB space for intuitive visualization, providing high-resolution uncertainty heatmaps that identify untrustworthy regions. Through extensive experiments on large-scale robot learning datasets (Bridge and DROID) and real-world evaluations, we demonstrate that our method not only provides calibrated uncertainty estimates within the training distribution, but also enables effective out-of-distribution detection.