Video-Infinity: Distributed Long Video Generation
Zhenxiong Tan, Xingyi Yang, Songhua Liu, Xinchao Wang
2024-06-25

Summary
This paper introduces Video-Infinity, a system designed to generate long videos quickly by distributing the work across multiple GPUs. It addresses the challenges of creating longer video sequences by splitting a video diffusion model's inference across devices and coordinating how those devices share context with one another.
What's the problem?
Most existing video generation models can only produce short clips, usually lasting just a few seconds. This limitation stems from the high memory requirements and long processing times of running inference on a single GPU. Splitting the workload across multiple GPUs could help, but it raises two challenges: ensuring all GPUs communicate efficiently to share timing and context information, and adapting models that are typically trained on short videos to produce longer ones without additional training.
What's the solution?
The authors developed Video-Infinity, a distributed inference pipeline for video generation. It relies on two key mechanisms: Clip parallelism, which gathers and shares context information across GPUs while keeping communication overhead low, and Dual-scope attention, which modulates temporal self-attention to balance local and global context so the video stays coherent across devices (a rough sketch of this idea follows below). With this setup, Video-Infinity generates videos of up to 2,300 frames in about 5 minutes on eight GPUs, roughly 100 times faster than prior methods.
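
To make the dual-scope idea concrete, here is a minimal PyTorch sketch of a temporal self-attention step in which each frame attends over its local neighborhood of frames plus a sparse, clip-wide sample of frames. This is an illustration of the concept rather than the paper's implementation; the tensor shapes and the local_window and global_stride parameters are assumptions chosen for exposition.

    import torch
    import torch.nn.functional as F

    def dual_scope_temporal_attention(x, local_window=8, global_stride=16):
        # x: (frames, tokens, dim) latent features for the frames on this device.
        # Temporal self-attention per spatial token: each frame attends over the
        # frames in its local window plus a strided global subset of frames,
        # mixing fine-grained local motion with clip-wide context.
        frames, tokens, dim = x.shape
        xt = x.permute(1, 0, 2)                              # (tokens, frames, dim)
        global_idx = torch.arange(0, frames, global_stride)  # sparse global frames
        out = torch.empty_like(xt)
        for t in range(frames):
            lo, hi = max(0, t - local_window), min(frames, t + local_window + 1)
            idx = torch.cat([torch.arange(lo, hi), global_idx]).unique()
            kv = xt[:, idx]                                  # (tokens, ctx, dim)
            q = xt[:, t:t + 1]                               # (tokens, 1, dim)
            attn = F.softmax(q @ kv.transpose(1, 2) / dim ** 0.5, dim=-1)
            out[:, t] = (attn @ kv).squeeze(1)               # (tokens, dim)
        return out.permute(1, 0, 2)                          # (frames, tokens, dim)

    # Example: 64 frames, 77 spatial tokens, 320-dim features.
    mixed = dual_scope_temporal_attention(torch.randn(64, 77, 320))
    print(mixed.shape)  # torch.Size([64, 77, 320])

The key design point the sketch tries to capture is that the attention cost per frame stays roughly constant as the video grows, because the global context is subsampled rather than attending densely over all frames.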
Why it matters?
This research is important because it advances the field of video generation, allowing for the creation of longer and more complex videos at high speeds. By overcoming the limitations of existing models, Video-Infinity opens up new possibilities for applications in entertainment, education, and more, where longer video content is increasingly in demand.
Abstract
Diffusion models have recently achieved remarkable results for video generation. Despite the encouraging performances, the generated videos are typically constrained to a small number of frames, resulting in clips lasting merely a few seconds. The primary challenges in producing longer videos include the substantial memory requirements and the extended processing time required on a single GPU. A straightforward solution would be to split the workload across multiple GPUs, which, however, leads to two issues: (1) ensuring all GPUs communicate effectively to share timing and context information, and (2) modifying existing video diffusion models, which are usually trained on short sequences, to create longer videos without additional training. To tackle these, in this paper we introduce Video-Infinity, a distributed inference pipeline that enables parallel processing across multiple GPUs for long-form video generation. Specifically, we propose two coherent mechanisms: Clip parallelism and Dual-scope attention. Clip parallelism optimizes the gathering and sharing of context information across GPUs, which minimizes communication overhead, while Dual-scope attention modulates the temporal self-attention to balance local and global contexts efficiently across the devices. Together, the two mechanisms join forces to distribute the workload and enable the fast generation of long videos. Under an 8× Nvidia 6000 Ada GPU (48 GB) setup, our method generates videos up to 2,300 frames in approximately 5 minutes, enabling long video generation at a speed 100 times faster than prior methods.
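
As a rough illustration of the clip-parallel communication pattern described in the abstract, the sketch below shows how each GPU, holding one contiguous chunk ("clip") of frames, could exchange boundary-frame features with its neighboring devices during denoising using point-to-point communication. The function name, the overlap parameter, and the use of torch.distributed batched send/receive are assumptions made for exposition, not the paper's actual interface.

    import torch
    import torch.distributed as dist

    def exchange_boundary_context(clip_feats, overlap=8):
        # clip_feats: (frames_per_device, tokens, dim) held by this rank's GPU.
        # Sends our first/last `overlap` frames to the neighboring ranks and
        # receives theirs, so temporal attention can look past the local clip.
        rank, world = dist.get_rank(), dist.get_world_size()
        send_left = clip_feats[:overlap].contiguous()    # goes to the previous clip's rank
        send_right = clip_feats[-overlap:].contiguous()  # goes to the next clip's rank
        left_ctx = torch.empty_like(send_left) if rank > 0 else None
        right_ctx = torch.empty_like(send_right) if rank < world - 1 else None

        ops = []
        if rank > 0:           # exchange with the rank holding the previous clip
            ops += [dist.P2POp(dist.isend, send_left, rank - 1),
                    dist.P2POp(dist.irecv, left_ctx, rank - 1)]
        if rank < world - 1:   # exchange with the rank holding the next clip
            ops += [dist.P2POp(dist.isend, send_right, rank + 1),
                    dist.P2POp(dist.irecv, right_ctx, rank + 1)]
        if ops:
            for req in dist.batch_isend_irecv(ops):
                req.wait()
        return left_ctx, right_ctx

Because each rank only talks to its immediate neighbors about a small overlap of frames, the communication volume stays small relative to the per-device denoising work, which is what makes the multi-GPU split pay off.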