
Movie Gen: A Cast of Media Foundation Models

Adam Polyak, Amit Zohar, Andrew Brown, Andros Tjandra, Animesh Sinha, Ann Lee, Apoorv Vyas, Bowen Shi, Chih-Yao Ma, Ching-Yao Chuang, David Yan, Dhruv Choudhary, Dingkang Wang, Geet Sethi, Guan Pang, Haoyu Ma, Ishan Misra, Ji Hou, Jialiang Wang, Kiran Jagadeesh, Kunpeng Li, Luxin Zhang

2024-10-18


Summary

This paper introduces Movie Gen, a set of advanced models that can create high-quality videos with synchronized audio and offer features like personalized video generation and precise editing based on user input.

What's the problem?

While large language models (LLMs) have made great progress in generating text, generating video that is both visually appealing and faithful to a user's instructions remains a challenge. Traditional video generation methods often struggle to produce high-quality content with synchronized audio and visual elements, making it hard for users to create the videos they envision.

What's the solution?

To solve this problem, the authors developed Movie Gen, whose core video generation model produces 1080p HD videos up to 16 seconds long. The model is trained on a vast amount of video and image data, allowing it to understand complex prompts and generate videos that match them. Movie Gen also lets users personalize videos with their own images and edit videos through simple text instructions. The authors further introduced a range of technical improvements to the model's training and inference efficiency.

Why it matters?

This research is significant because it sets a new standard for video generation technology, making it easier for users to create customized and high-quality videos. By improving how AI can generate and edit videos, Movie Gen opens up new possibilities in fields like entertainment, marketing, and education, where engaging visual content is essential.

Abstract

We present Movie Gen, a cast of foundation models that generates high-quality, 1080p HD videos with different aspect ratios and synchronized audio. We also show additional capabilities such as precise instruction-based video editing and generation of personalized videos based on a user's image. Our models set a new state-of-the-art on multiple tasks: text-to-video synthesis, video personalization, video editing, video-to-audio generation, and text-to-audio generation. Our largest video generation model is a 30B parameter transformer trained with a maximum context length of 73K video tokens, corresponding to a generated video of 16 seconds at 16 frames-per-second. We show multiple technical innovations and simplifications on the architecture, latent spaces, training objectives and recipes, data curation, evaluation protocols, parallelization techniques, and inference optimizations that allow us to reap the benefits of scaling pre-training data, model size, and training compute for training large scale media generation models. We hope this paper helps the research community to accelerate progress and innovation in media generation models. All videos from this paper are available at https://go.fb.me/MovieGenResearchVideos.
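The abstract's numbers imply a concrete per-frame token budget: 73K video tokens for a 16-second clip at 16 frames per second. A quick back-of-the-envelope check (the per-frame figure below is derived from those reported numbers, not a tokenizer configuration stated in the paper) can be sketched as:

```python
# Sanity-check of the video token budget reported in the abstract:
# a 16 s clip at 16 fps with a maximum context of 73K video tokens.

def video_token_count(duration_s: float, fps: int, tokens_per_frame: int) -> int:
    """Total number of video tokens for a clip of the given length."""
    return int(duration_s * fps) * tokens_per_frame

frames = 16 * 16                  # 256 frames in a 16 s, 16 fps clip
per_frame = 73_000 // frames      # ~285 tokens per frame implied by the abstract

print(frames)                                 # 256
print(per_frame)                              # 285
print(video_token_count(16, 16, per_frame))   # 72960, close to the reported 73K
```

This illustrates why long-context training matters here: even at a modest ~285 tokens per frame, a 16-second clip already fills a 73K-token context, which is why the paper emphasizes parallelization techniques and inference optimizations for the 30B-parameter transformer.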