Long Context Tuning for Video Generation
Yuwei Guo, Ceyuan Yang, Ziyan Yang, Zhibei Ma, Zhijie Lin, Zhenheng Yang, Dahua Lin, Lu Jiang
2025-03-14
Summary
This paper introduces a new training method called Long Context Tuning (LCT) that improves consistency across the shots of AI-generated multi-shot videos, making them more like real narrative films.
What's the problem?
Existing AI models can create realistic single-shot videos, but they struggle to maintain visual and dynamic consistency across the multiple shots of a longer, narrative-driven video. This makes it difficult to generate coherent stories.
What's the solution?
The researchers developed LCT, which expands the model's context window from a single shot to all shots in a scene. This lets the model learn scene-level consistency directly from data and generate multiple shots that fit together, without adding any new parameters.
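The core idea can be pictured as a change in the attention mask. A minimal sketch (not the authors' code, and the per-shot token counts below are hypothetical): a single-shot model effectively uses a block-diagonal mask where tokens attend only within their own shot, while LCT widens this to full attention over every shot in the scene.

```python
def single_shot_mask(shot_lens):
    """Block-diagonal mask: each token attends only within its own shot."""
    n = sum(shot_lens)
    mask = [[False] * n for _ in range(n)]
    start = 0
    for length in shot_lens:
        for i in range(start, start + length):
            for j in range(start, start + length):
                mask[i][j] = True
        start += length
    return mask

def scene_level_mask(shot_lens):
    """Scene-level full attention: every token attends to every shot."""
    n = sum(shot_lens)
    return [[True] * n for _ in range(n)]

# Hypothetical scene with two shots of 2 and 3 tokens:
per_shot = single_shot_mask([2, 3])   # token 0 cannot see token 2 (other shot)
scene = scene_level_mask([2, 3])      # token 0 sees all 5 tokens
```

Because only the attention pattern (and the position embedding, per the paper) changes, the pre-trained weights are reused as-is, which is why no extra parameters are needed.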
Why it matters?
This work matters because it helps bridge the gap between single-shot video generation and creating full, multi-scene narrative videos, paving the way for more practical and creative visual content creation.
Abstract
Recent advances in video generation can produce realistic, minute-long single-shot videos with scalable diffusion transformers. However, real-world narrative videos require multi-shot scenes with visual and dynamic consistency across shots. In this work, we introduce Long Context Tuning (LCT), a training paradigm that expands the context window of pre-trained single-shot video diffusion models to learn scene-level consistency directly from data. Our method expands full attention mechanisms from individual shots to encompass all shots within a scene, incorporating interleaved 3D position embedding and an asynchronous noise strategy, enabling both joint and auto-regressive shot generation without additional parameters. Models with bidirectional attention after LCT can further be fine-tuned with context-causal attention, facilitating auto-regressive generation with efficient KV-cache. Experiments demonstrate single-shot models after LCT can produce coherent multi-shot scenes and exhibit emerging capabilities, including compositional generation and interactive shot extension, paving the way for more practical visual content creation. See https://guoyww.github.io/projects/long-context-video/ for more details.
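The context-causal fine-tuning mentioned in the abstract can be illustrated with a third mask variant. In this sketch (an assumption-based illustration, not the authors' implementation), attention stays bidirectional within a shot but is causal across shots, so earlier shots never attend to later ones; this is what makes an efficient KV-cache possible during auto-regressive shot generation.

```python
def context_causal_mask(shot_lens):
    """Bidirectional within each shot, causal across shots:
    tokens in shot k attend to all tokens in shots 0..k."""
    shot_of = []  # shot index of each token
    for k, length in enumerate(shot_lens):
        shot_of += [k] * length
    n = len(shot_of)
    return [[shot_of[j] <= shot_of[i] for j in range(n)] for i in range(n)]

# Hypothetical two-shot scene with 2 tokens per shot:
mask = context_causal_mask([2, 2])
# Shot 1 tokens see shot 0 (mask[2][0] is True),
# but shot 0 tokens cannot see shot 1 (mask[0][2] is False).
```

Since a completed shot's keys and values never change once generated, they can be cached and reused when extending the scene shot by shot.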