MotionStream: Real-Time Video Generation with Interactive Motion Controls

Joonghyuk Shin, Zhengqi Li, Richard Zhang, Jun-Yan Zhu, Jaesik Park, Eli Shechtman, Xun Huang

2025-11-04

Summary

This paper introduces MotionStream, a system that generates videos from text and motion inputs far faster than existing methods, enabling real-time interaction.

What's the problem?

Current methods for generating videos based on text and movement are slow, often taking minutes to create a single video, and they can't react to changes as they're being made. This makes them unsuitable for applications needing immediate feedback, like interactive design or real-time control. Specifically, the challenge is to create long videos quickly without losing quality or requiring massive computing power as the video gets longer.

What's the solution?

The researchers started with a powerful but slow text-to-video model and trained a faster, simpler model to mimic its results, a technique called distillation. They also designed an attention mechanism that focuses only on the most relevant parts of the video while generating new frames, using a sliding window combined with 'attention sinks' to handle long videos efficiently. This lets the model predict future frames without remembering everything from the beginning, keeping generation fast and stable.

Why it matters?

MotionStream is a significant step forward because it allows for real-time video generation. Imagine drawing a path for a character and seeing them move along it instantly, or controlling a camera in a virtual world with no delay. This opens up possibilities for more interactive and creative video experiences, and it is roughly 100 times faster than previous approaches while still producing high-quality results.

Abstract

Current motion-conditioned video generation methods suffer from prohibitive latency (minutes per video) and non-causal processing that prevents real-time interaction. We present MotionStream, enabling sub-second latency with up to 29 FPS streaming generation on a single GPU. Our approach begins by augmenting a text-to-video model with motion control, which generates high-quality videos that adhere to the global text prompt and local motion guidance, but does not perform inference on the fly. As such, we distill this bidirectional teacher into a causal student through Self Forcing with Distribution Matching Distillation, enabling real-time streaming inference. Several key challenges arise when generating videos of long, potentially infinite time-horizons: (1) bridging the domain gap from training on finite length and extrapolating to infinite horizons, (2) sustaining high quality by preventing error accumulation, and (3) maintaining fast inference, without incurring growth in computational cost due to increasing context windows. A key to our approach is introducing carefully designed sliding-window causal attention, combined with attention sinks. By incorporating self-rollout with attention sinks and KV cache rolling during training, we properly simulate inference-time extrapolations with a fixed context window, enabling constant-speed generation of arbitrarily long videos. Our models achieve state-of-the-art results in motion following and video quality while being two orders of magnitude faster, uniquely enabling infinite-length streaming. With MotionStream, users can paint trajectories, control cameras, or transfer motion, and see results unfold in real-time, delivering a truly interactive experience.
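The abstract's "KV cache rolling" with a fixed context window can be pictured as a bounded cache that pins the sink entries and evicts the oldest windowed entries as new frames arrive. This is a minimal sketch under assumed semantics (the class name and scheme are illustrative, not the paper's implementation):

```python
from collections import deque

class RollingKVCache:
    """Fixed-size KV cache: keep the first `num_sinks` entries permanently
    (attention sinks) and roll the remaining entries in a window of size
    `window`, so memory and compute stay constant over infinite horizons."""
    def __init__(self, window, num_sinks):
        self.num_sinks = num_sinks
        self.sinks = []                     # KV entries pinned forever
        self.rolling = deque(maxlen=window) # recent KV entries; oldest evicted

    def append(self, kv):
        if len(self.sinks) < self.num_sinks:
            self.sinks.append(kv)           # first few entries become sinks
        else:
            self.rolling.append(kv)         # deque silently drops the oldest

    def context(self):
        """Entries visible to the next generation step."""
        return self.sinks + list(self.rolling)

cache = RollingKVCache(window=3, num_sinks=2)
for step in range(10):                      # stand-in for per-frame KV pairs
    cache.append(step)
# cache.context() -> [0, 1, 7, 8, 9]: the sinks plus the last 3 steps
```

During training, the paper's self-rollout with this kind of rolling cache simulates the same fixed-window conditions the model will face at inference time, which is what closes the finite-training / infinite-inference domain gap.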