STIV: Scalable Text and Image Conditioned Video Generation
Zongyu Lin, Wei Liu, Chen Chen, Jiasen Lu, Wenze Hu, Tsu-Jui Fu, Jesse Allardice, Zhengfeng Lai, Liangchen Song, Bowen Zhang, Cha Chen, Yiran Fei, Yifan Jiang, Lezhi Li, Yizhou Sun, Kai-Wei Chang, Yinfei Yang
2024-12-11

Summary
This paper introduces STIV, a scalable method for generating videos conditioned on both text and images, advancing how AI systems create visual content efficiently.
What's the problem?
While video generation technology has advanced rapidly, there is still no clear, systematic recipe for building models that reliably produce high-quality videos from text descriptions or images. Existing models often struggle with consistency and require substantial computational resources to produce good results.
What's the solution?
The authors developed STIV, which conditions video generation on both text and images within a Diffusion Transformer (DiT): the image condition is injected by frame replacement (the conditioning image's latent stands in for the first frame), while the text condition is applied through joint image-text classifier-free guidance. This allows a single model to handle both text-to-video (T2V) and text-image-to-video (TI2V) generation, and it can also be adapted to related tasks such as predicting future frames or generating longer videos without needing extensive retraining.
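To make the frame-replacement idea concrete, here is a minimal sketch; the function name, tensor shapes, and the `use_image_cond` flag are illustrative assumptions rather than the authors' released code. The clean latent of the conditioning image simply overwrites the first frame's noisy latent before the DiT denoises the clip.

```python
import torch

def frame_replacement(noisy_latents: torch.Tensor,
                      image_latent: torch.Tensor,
                      use_image_cond: bool) -> torch.Tensor:
    """Hypothetical sketch of image conditioning via frame replacement.

    noisy_latents: (B, T, C, H, W) noisy video latents fed to the DiT.
    image_latent:  (B, C, H, W) clean VAE latent of the conditioning image.
    use_image_cond: False emulates dropping the image condition, e.g. for
                    pure text-to-video generation or classifier-free guidance.
    """
    latents = noisy_latents.clone()
    if use_image_cond:
        # Overwrite the first frame's noisy latent with the clean image latent,
        # so frame 0 is given exactly and the model generates the remaining frames.
        latents[:, 0] = image_latent
    return latents
```

In this sketch, calling the function with `use_image_cond=False` corresponds to the pure text-to-video case, which is how one model can serve both T2V and TI2V.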
Why it matters?
This research is important because it enhances the capabilities of AI in video generation, making it easier for creators to produce engaging content. By providing a scalable and efficient method, STIV can be used in many applications, such as filmmaking, advertising, and social media, allowing for more creative expression and faster content creation.
Abstract
The field of video generation has made remarkable advancements, yet there remains a pressing need for a clear, systematic recipe that can guide the development of robust and scalable models. In this work, we present a comprehensive study that systematically explores the interplay of model architectures, training recipes, and data curation strategies, culminating in a simple and scalable text-image-conditioned video generation method, named STIV. Our framework integrates the image condition into a Diffusion Transformer (DiT) through frame replacement, while incorporating text conditioning via joint image-text conditional classifier-free guidance. This design enables STIV to perform both text-to-video (T2V) and text-image-to-video (TI2V) tasks simultaneously. Additionally, STIV can be easily extended to various applications, such as video prediction, frame interpolation, multi-view generation, and long video generation. With comprehensive ablation studies on T2I, T2V, and TI2V, STIV demonstrates strong performance despite its simple design. An 8.7B model at 512 resolution achieves 83.1 on VBench T2V, surpassing both leading open and closed-source models like CogVideoX-5B, Pika, Kling, and Gen-3. The same-sized model also achieves a state-of-the-art result of 90.1 on the VBench I2V task at 512 resolution. By providing a transparent and extensible recipe for building cutting-edge video generation models, we aim to empower future research and accelerate progress toward more versatile and reliable video generation solutions.
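One plausible reading of the joint image-text conditional classifier-free guidance mentioned above is the standard CFG combination with a single unconditional branch in which the text and image conditions are dropped together. The sketch below assumes a hypothetical DiT interface and a single guidance scale; it is not the paper's exact formulation.

```python
import torch

@torch.no_grad()
def joint_cfg_step(model, x_t, t, text_emb, image_latent, guidance_scale=7.5):
    """Illustrative joint image-text classifier-free guidance step.

    `model` is assumed to be a DiT whose forward pass accepts optional text
    embeddings and an image latent (used for frame replacement); passing None
    for both stands in for the jointly unconditional branch.
    """
    # Conditional pass: both text and image conditions are provided.
    cond_out = model(x_t, t, text_emb=text_emb, image_latent=image_latent)
    # Unconditional pass: text and image are dropped jointly, giving a single
    # null branch instead of two separate guidance terms.
    uncond_out = model(x_t, t, text_emb=None, image_latent=None)
    # Standard classifier-free guidance combination with one guidance scale.
    return uncond_out + guidance_scale * (cond_out - uncond_out)
```

With the image latent handled via frame replacement as sketched earlier, the same sampling step covers both the T2V case (no image condition) and the TI2V case (with an image condition).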