
Stable Video Infinity: Infinite-Length Video Generation with Error Recycling

Wuyang Li, Wentao Pan, Po-Chien Luan, Yang Gao, Alexandre Alahi

2025-10-14


Summary

This paper introduces a new method called Stable Video Infinity (SVI) for creating videos that can, in theory, go on forever while still looking realistic and consistent.

What's the problem?

Existing methods for making long videos struggle because errors build up over time, leading to unrealistic or repetitive scenes. These methods try to fix this with hand-crafted tweaks to how the video is generated, but they can only extend a video from a single text prompt and don't handle evolving storylines well. The core issue isn't just error buildup: the system is trained only on clean data, then asked at generation time to continue from its *own* imperfect outputs, and this mismatch between training and generation makes the errors compound.

What's the solution?

SVI tackles this problem with a technique called Error-Recycling Fine-Tuning. Essentially, the system learns from its own mistakes. It intentionally adds errors into the video creation process, then uses those errors as feedback to improve its future predictions. It's like practicing a skill and learning from where you mess up. This involves injecting past errors, calculating how far off the predictions are, and then storing those errors to use again later, creating a cycle of learning and correction.
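The cycle described above (inject past errors, measure how far off the prediction was, bank that residual for reuse) can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the class name, the `predict` callback, and the use of a simple list as the error bank are all assumptions made for the sketch.

```python
import random


class ErrorRecyclingTrainer:
    """Toy sketch of the error-recycling cycle: inject a banked error into
    the clean input, measure the prediction residual, then bank that
    residual so it can perturb future inputs. All names here are
    illustrative assumptions, not the paper's actual API."""

    def __init__(self):
        self.error_bank = []  # replay memory of past residuals

    def inject(self, clean_input):
        # Perturb the clean input with a previously banked error, if any,
        # simulating the error-accumulated inputs seen at generation time.
        if self.error_bank:
            return clean_input + random.choice(self.error_bank)
        return clean_input

    def step(self, clean_input, target, predict):
        noisy_input = self.inject(clean_input)
        prediction = predict(noisy_input)
        residual = prediction - target    # how far off the model was
        self.error_bank.append(residual)  # recycle it as future input noise
        return residual
```

Each call to `step` closes the loop: the residual computed now becomes the perturbation injected later, so the model is always training on inputs that look like its own mistakes.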

Why it matters?

This research is important because it opens the door to generating much longer and more complex videos automatically. It allows for videos that aren't limited by the accumulation of errors and can adapt to different inputs like sound or movement, potentially revolutionizing areas like filmmaking, gaming, and virtual reality by making it easier to create dynamic and engaging content.

Abstract

We propose Stable Video Infinity (SVI) that is able to generate infinite-length videos with high temporal consistency, plausible scene transitions, and controllable streaming storylines. While existing long-video methods attempt to mitigate accumulated errors via handcrafted anti-drifting (e.g., modified noise scheduler, frame anchoring), they remain limited to single-prompt extrapolation, producing homogeneous scenes with repetitive motions. We identify that the fundamental challenge extends beyond error accumulation to a critical discrepancy between the training assumption (seeing clean data) and the test-time autoregressive reality (conditioning on self-generated, error-prone outputs). To bridge this hypothesis gap, SVI incorporates Error-Recycling Fine-Tuning, a new type of efficient training that recycles the Diffusion Transformer (DiT)'s self-generated errors into supervisory prompts, thereby encouraging DiT to actively identify and correct its own errors. This is achieved by injecting, collecting, and banking errors through closed-loop recycling, autoregressively learning from error-injected feedback. Specifically, we (i) inject historical errors made by DiT to intervene on clean inputs, simulating error-accumulated trajectories in flow matching; (ii) efficiently approximate predictions with one-step bidirectional integration and calculate errors with residuals; (iii) dynamically bank errors into replay memory across discretized timesteps, which are resampled for new input. SVI is able to scale videos from seconds to infinite durations with no additional inference cost, while remaining compatible with diverse conditions (e.g., audio, skeleton, and text streams). We evaluate SVI on three benchmarks, including consistent, creative, and conditional settings, thoroughly verifying its versatility and state-of-the-art role.
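Step (iii) of the abstract banks errors into replay memory across discretized timesteps and resamples them for new inputs. A minimal sketch of such a timestep-bucketed bank follows; the bucket count, FIFO capacity, and uniform resampling are assumptions made for illustration, not values taken from the paper.

```python
import random
from collections import defaultdict


class TimestepErrorBank:
    """Illustrative replay memory that banks residual errors per
    discretized timestep and resamples them to perturb new inputs.
    Bucketing scheme and capacity are assumptions for this sketch."""

    def __init__(self, num_buckets=10, capacity=100):
        self.num_buckets = num_buckets
        self.capacity = capacity
        self.bank = defaultdict(list)  # bucket index -> list of errors

    def bucket(self, t):
        # Map a continuous timestep t in [0, 1) to a discrete bucket.
        return min(int(t * self.num_buckets), self.num_buckets - 1)

    def add(self, t, error):
        bucket = self.bank[self.bucket(t)]
        bucket.append(error)
        if len(bucket) > self.capacity:
            bucket.pop(0)  # drop the oldest entry, FIFO style

    def sample(self, t):
        # Resample a banked error for a new input at timestep t,
        # or None if nothing has been banked for that bucket yet.
        bucket = self.bank[self.bucket(t)]
        return random.choice(bucket) if bucket else None
```

Keying the bank by timestep matters because the scale of the model's errors differs across the denoising trajectory, so an injected error should come from the same stage it was observed at.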