DiffusionBrowser: Interactive Diffusion Previews via Multi-Branch Decoders
Susung Hong, Chongjian Ge, Zhifei Zhang, Jui-Hsien Wang
2025-12-16
Summary
This paper introduces DiffusionBrowser, a new system for making video creation with AI more interactive and understandable.
What's the problem?
Current AI video generators, called diffusion models, are really good at making videos, but they have some downsides. They aren't always precise, they take a long time to generate videos, and it's hard to know what's happening *while* the video is being created – it's like a black box. Users have to wait and hope for the best, without being able to give feedback or see progress easily.
What's the solution?
The researchers created DiffusionBrowser, which works with any existing video AI model. It's a fast decoder that can quickly create previews of the video at any stage of the generation process. These previews show not just the image, but also information about the scene itself, and they're generated more than four times faster than real time (under a second for a 4-second video). This allows users to see how the video is developing and even influence the creation by making small changes during the process, like adjusting the style or content.
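To make the idea concrete, here is a minimal toy sketch of attaching a lightweight preview decoder to a denoising loop. All names (`preview_decoder`, `denoise_step`) and the linear decoder itself are hypothetical illustrations, not the paper's actual architecture or API:

```python
import numpy as np

# Hypothetical sketch: a cheap preview decoder called at every denoising
# step, so the user can watch the video take shape instead of waiting for
# the final output. The real system uses a trained neural decoder; here a
# tiny linear map stands in for it.

rng = np.random.default_rng(0)
LATENT_DIM, PIXEL_DIM = 16, 64
W_preview = rng.standard_normal((LATENT_DIM, PIXEL_DIM)) * 0.1  # toy decoder weights

def preview_decoder(latent):
    """Cheap linear map from a noisy latent to an RGB-like preview."""
    return np.tanh(latent @ W_preview)

def denoise_step(latent, t, total):
    """Toy denoiser: blend the noisy latent toward a fixed 'clean' latent."""
    clean = np.ones(LATENT_DIM)
    alpha = (t + 1) / total
    return (1 - alpha) * latent + alpha * clean

latent = rng.standard_normal(LATENT_DIM)
previews = []
T = 10
for t in range(T):
    latent = denoise_step(latent, t, T)
    previews.append(preview_decoder(latent))  # a preview at every step

print(len(previews), previews[0].shape)
```

The key design point the sketch illustrates: because the preview decoder is far cheaper than a full decode, it can be run at any intermediate timestep without noticeably slowing generation.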
Why it matters?
DiffusionBrowser is important because it gives users more control and insight into AI video generation. It makes the process less mysterious and allows for more creative control. Also, by studying how the decoder works, the researchers gained a better understanding of how these AI models actually *think* when creating videos, which could lead to even better AI in the future.
Abstract
Video diffusion models have revolutionized generative video synthesis, but they are imprecise, slow, and can be opaque during generation -- keeping users in the dark for a prolonged period. In this work, we propose DiffusionBrowser, a model-agnostic, lightweight decoder framework that allows users to interactively generate previews at any point (timestep or transformer block) during the denoising process. Our model generates multi-modal preview representations, including RGB and scene intrinsics, that convey appearance and motion consistent with the final video, at more than 4× real-time speed (less than 1 second for a 4-second video). With the trained decoder, we show that it is possible to interactively guide the generation at intermediate noise steps via stochasticity reinjection and modal steering, unlocking a new control capability. Moreover, we systematically probe the model using the learned decoders, revealing how scene, object, and other details are composed and assembled during the otherwise black-box denoising process.
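The abstract's "stochasticity reinjection" can be sketched as: denoise to an intermediate step, then add fresh noise and continue, so each reinjection branches the generation onto a different but related trajectory. The toy denoiser and the noise scale below are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Hypothetical sketch of stochasticity reinjection: run a deterministic
# denoising loop halfway, then branch by adding fresh noise and finishing
# each branch separately. Distinct noise draws yield distinct outputs.

rng = np.random.default_rng(1)
D, T = 8, 20

def denoise_step(x, t):
    """Toy deterministic denoiser that contracts the latent each step."""
    return x * (1 - 1 / (T - t))

# Denoise to an intermediate step.
x = rng.standard_normal(D)
for t in range(T // 2):
    x = denoise_step(x, t)

# Branch: reinject noise, then finish denoising each branch.
branches = []
for _ in range(3):
    xb = x + 0.5 * rng.standard_normal(D)  # stochasticity reinjection
    for t in range(T // 2, T - 1):
        xb = denoise_step(xb, t)
    branches.append(xb)
```

Because the branches share the first half of the trajectory, they stay close to the original generation while still differing, which is what makes this usable as an interactive steering handle.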