Tuning-Free Multi-Event Long Video Generation via Synchronized Coupled Sampling
Subin Kim, Seoung Wug Oh, Jui-Hsien Wang, Joon-Young Lee, Jinwoo Shin
2025-03-12
Summary
This paper introduces SynCoS, a method for generating long videos from text prompts without retraining AI models, keeping scenes consistent over time by synchronizing how the AI denoises video frames.
What's the problem?
Current AI tools generate short videos well but struggle with long ones: scenes drift off-topic or look choppy because the models only attend to nearby frames, not the whole video.
What's the solution?
SynCoS combines two denoising strategies, one for smooth frame-to-frame transitions and another for overall scene consistency, and synchronizes their steps so the entire video stays aligned with the text prompts.
Why it matters?
This helps filmmakers, game developers, and content creators make longer, more coherent videos from text descriptions faster, without needing expensive retraining of AI models.
Abstract
While recent advancements in text-to-video diffusion models enable high-quality short video generation from a single prompt, generating real-world long videos in a single pass remains challenging due to limited data and high computational costs. To address this, several works propose tuning-free approaches, i.e., extending existing models for long video generation, specifically using multiple prompts to allow for dynamic and controlled content changes. However, these methods primarily focus on ensuring smooth transitions between adjacent frames, often leading to content drift and a gradual loss of semantic coherence over longer sequences. To tackle such an issue, we propose Synchronized Coupled Sampling (SynCoS), a novel inference framework that synchronizes denoising paths across the entire video, ensuring long-range consistency across both adjacent and distant frames. Our approach combines two complementary sampling strategies: reverse and optimization-based sampling, which ensure seamless local transitions and enforce global coherence, respectively. However, directly alternating between these samplings misaligns denoising trajectories, disrupting prompt guidance and introducing unintended content changes as they operate independently. To resolve this, SynCoS synchronizes them through a grounded timestep and a fixed baseline noise, ensuring fully coupled sampling with aligned denoising paths. Extensive experiments show that SynCoS significantly improves multi-event long video generation, achieving smoother transitions and superior long-range coherence, outperforming previous approaches both quantitatively and qualitatively.
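The coupled-sampling idea in the abstract can be illustrated with a toy loop. This is a minimal conceptual sketch, not the authors' implementation: `local_denoise` and `global_refine` are hypothetical stand-ins for the reverse and optimization-based samplers, and the toy array stands in for video latents. The key structural points from the paper are shown: both samplers run at the same grounded timestep, and re-noising uses one fixed baseline noise so the two samplers stay on a single coupled denoising trajectory.

```python
# Conceptual sketch of SynCoS-style synchronized coupled sampling.
# All functions, shapes, and constants are illustrative assumptions,
# not the authors' actual model or update rules.
import numpy as np

rng = np.random.default_rng(0)

def local_denoise(chunk, t):
    # Stand-in for a per-chunk reverse-sampling step (smooth local
    # transitions); a real diffusion model would predict noise here.
    return chunk * (1.0 - 0.1 * t)

def global_refine(video, t):
    # Stand-in for an optimization-based step that pulls all frames
    # toward shared global structure (long-range coherence).
    return 0.9 * video + 0.1 * video.mean(axis=0, keepdims=True)

num_frames, chunk = 8, 4
video = rng.standard_normal((num_frames, 2))       # toy "latent video"
baseline_noise = rng.standard_normal(video.shape)  # fixed for all steps

for t in np.linspace(1.0, 0.0, 5):  # grounded timestep shared by both samplers
    # 1) Local pass: denoise each chunk at the SAME timestep t.
    for s in range(0, num_frames, chunk):
        video[s:s + chunk] = local_denoise(video[s:s + chunk], t)
    # 2) Global pass: refine the whole video, again at timestep t.
    video = global_refine(video, t)
    # 3) Re-noise from the FIXED baseline noise, so the two samplers
    #    share one denoising path instead of drifting apart.
    video = video + t * 0.05 * baseline_noise

print(video.shape)  # (8, 2)
```

The point of the sketch is the synchronization: if each pass drew fresh noise or used its own timestep schedule, the two trajectories would diverge, which is exactly the misalignment the paper says naive alternation causes.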