BroadWay: Boost Your Text-to-Video Generation Model in a Training-free Way
Jiazi Bu, Pengyang Ling, Pan Zhang, Tong Wu, Xiaoyi Dong, Yuhang Zang, Yuhang Cao, Dahua Lin, Jiaqi Wang
2024-10-10

Summary
This paper introduces BroadWay, a training-free method that improves the quality of text-to-video (T2V) generation models without adding parameters or increasing memory use or sampling time.
What's the problem?
Text-to-video generation models can create videos from text descriptions, but the results often suffer from implausible structures, temporal inconsistency, and so little motion that the videos look almost static. The paper traces these artifacts to the models' temporal attention: disagreement between the temporal attention maps of different blocks correlates with temporal inconsistency, and low energy in those maps corresponds to small motion amplitude in the generated video.
What's the solution?
BroadWay tackles these issues with two techniques, each sketched below: Temporal Self-Guidance, which reduces the disparity between the temporal attention maps of different decoder blocks to improve structural plausibility and temporal consistency, and Fourier-based Motion Enhancement, which amplifies the energy of the temporal attention map to produce larger and richer motion. Both operate directly on the attention maps at inference time, so video quality improves without extra parameters, memory, or sampling time.
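The summary does not give the exact update rule, but a minimal sketch of Temporal Self-Guidance could look like the following, assuming the guidance simply interpolates a block's temporal attention map toward a reference map (for example, one taken from another decoder block at the same denoising step). The tensor shapes, the choice of reference, and the strength `alpha` are illustrative assumptions, not the paper's exact settings.

```python
import torch

def temporal_self_guidance(attn_map: torch.Tensor,
                           ref_attn_map: torch.Tensor,
                           alpha: float = 0.5) -> torch.Tensor:
    """Nudge one block's temporal attention map toward a reference map.

    attn_map:     temporal attention map of the current decoder block,
                  e.g. shape (batch * spatial, frames, frames), post-softmax.
    ref_attn_map: temporal attention map used as guidance (assumed to come
                  from another decoder block at the same denoising step).
    alpha:        guidance strength; 0 keeps the original map, 1 replaces
                  it with the reference (hypothetical value).
    """
    # Interpolating toward the reference reduces the cross-block disparity
    # that the paper links to temporal inconsistency.
    return attn_map + alpha * (ref_attn_map - attn_map)

# Toy usage with random maps of shape (batch * spatial, frames, frames).
attn_l = torch.softmax(torch.rand(2, 16, 16), dim=-1)
attn_ref = torch.softmax(torch.rand(2, 16, 16), dim=-1)
guided = temporal_self_guidance(attn_l, attn_ref, alpha=0.5)
```

In practice such an adjustment would be applied inside the temporal attention layers during sampling, which is why it adds no parameters or training.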
Why it matters?
This research is significant because it allows for better quality video generation from text descriptions while keeping the process efficient. By improving how T2V models handle motion and timing, BroadWay can enhance applications in entertainment, education, and other fields that rely on high-quality video content.
Abstract
Text-to-video (T2V) generation models, which offer convenient visual creation, have recently garnered increasing attention. Despite their substantial potential, the generated videos may present artifacts, including structural implausibility, temporal inconsistency, and a lack of motion, often resulting in near-static videos. In this work, we have identified a correlation between the disparity of temporal attention maps across different blocks and the occurrence of temporal inconsistencies. Additionally, we have observed that the energy contained within the temporal attention maps is directly related to the magnitude of motion amplitude in the generated videos. Based on these observations, we present BroadWay, a training-free method to improve the quality of text-to-video generation without introducing additional parameters or increasing memory or sampling time. Specifically, BroadWay is composed of two principal components: 1) Temporal Self-Guidance improves the structural plausibility and temporal consistency of generated videos by reducing the disparity between the temporal attention maps across various decoder blocks. 2) Fourier-based Motion Enhancement enhances the magnitude and richness of motion by amplifying the energy of the temporal attention map. Extensive experiments demonstrate that BroadWay significantly improves the quality of text-to-video generation with negligible additional cost.
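As a rough illustration of the second component, the sketch below amplifies the energy of a temporal attention map in the frequency domain along the frame axis. Treating the DC component as the part to preserve, the amplification factor `beta`, and the final row re-normalization are all assumptions for illustration rather than the paper's exact procedure.

```python
import torch

def fourier_motion_enhancement(attn_map: torch.Tensor,
                               beta: float = 1.5) -> torch.Tensor:
    """Amplify the energy of a temporal attention map via the Fourier domain.

    attn_map: temporal attention map, shape (..., frames, frames).
    beta:     amplification factor (> 1) applied to the non-DC frequency
              components along the last dimension (hypothetical value).
    """
    # Move to the frequency domain along the frame axis.
    freq = torch.fft.fft(attn_map, dim=-1)

    # Keep the DC component and scale the remaining components, which
    # raises the map's overall energy; per the paper's observation, higher
    # energy corresponds to larger motion amplitude in the generated video.
    scale = torch.ones(attn_map.shape[-1], dtype=freq.real.dtype,
                       device=attn_map.device)
    scale[1:] = beta
    freq = freq * scale

    # Back to the attention domain; the small imaginary residue is dropped.
    enhanced = torch.fft.ifft(freq, dim=-1).real

    # Re-normalize rows so they remain valid attention distributions
    # (a safeguard added here, not described in the summary).
    enhanced = enhanced.clamp(min=0)
    enhanced = enhanced / enhanced.sum(dim=-1, keepdim=True).clamp(min=1e-8)
    return enhanced

# Toy usage on a random post-softmax temporal attention map.
attn = torch.softmax(torch.rand(2, 16, 16), dim=-1)
boosted = fourier_motion_enhancement(attn, beta=1.5)
```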