AV-DiT: Efficient Audio-Visual Diffusion Transformer for Joint Audio and Video Generation
Kai Wang, Shijian Deng, Jing Shi, Dimitrios Hatzinakos, Yapeng Tian
2024-06-13

Summary
This paper presents AV-DiT, a diffusion-transformer model designed to efficiently generate realistic videos together with matching audio tracks. It aims to improve how machines create content that combines sound and moving images.
What's the problem?
While existing models can generate high-quality images, videos, and audio separately, they often struggle to combine these modalities into a single coherent output. In addition, many of these models require substantial computational power and resources, making them less practical for real-world applications.
What's the solution?
AV-DiT addresses these challenges with a single shared backbone that has been pre-trained on image data, allowing the model to generate both audio and video without training a separate model for each modality. The video branch inserts a trainable temporal attention layer into the otherwise frozen backbone to keep frames consistent over time, so the video flows smoothly. Lightweight trainable adapters likewise specialize the backbone for audio and let the audio and visual streams interact, keeping the two tracks aligned. Extensive testing on the AIST++ and Landscape datasets shows that AV-DiT produces high-quality audio-visual content with far fewer trainable parameters than previous models.
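To make the adapter idea concrete, here is a minimal PyTorch-style sketch of the video branch: a frozen, image-pretrained DiT block handles spatial denoising frame by frame, and a small, newly inserted temporal attention layer, the only trainable part, ties the frames together. All class names, tensor shapes, and the frozen block's call signature are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class TemporalAttentionAdapter(nn.Module):
    """Trainable temporal attention placed next to a frozen spatial DiT block.

    Attends across the frame axis for each spatial token so that per-frame
    features stay temporally consistent (illustrative sketch only).
    """

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, dim) -> attention over the frame axis
        b, f, t, d = x.shape
        h = x.permute(0, 2, 1, 3).reshape(b * t, f, d)   # (batch*tokens, frames, dim)
        h_norm = self.norm(h)
        attn_out, _ = self.attn(h_norm, h_norm, h_norm)
        h = h + attn_out                                 # residual connection
        return h.reshape(b, t, f, d).permute(0, 2, 1, 3)


class VideoBranchBlock(nn.Module):
    """Frozen image-pretrained DiT block plus the trainable temporal adapter."""

    def __init__(self, frozen_dit_block: nn.Module, dim: int):
        super().__init__()
        self.spatial = frozen_dit_block
        for p in self.spatial.parameters():              # backbone stays frozen
            p.requires_grad_(False)
        self.temporal = TemporalAttentionAdapter(dim)    # only trainable part

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        b, f, t, d = x.shape
        # Spatial denoising: the frozen block treats each frame as an image.
        x = self.spatial(x.reshape(b * f, t, d), cond.repeat_interleave(f, dim=0))
        x = x.reshape(b, f, t, d)
        # Temporal step: the adapter links frames for smooth, consistent video.
        return self.temporal(x)
```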
Why it matters?
This research is significant because it enhances the ability of AI systems to create realistic videos that include synchronized sound. By reducing the complexity and resource requirements of such models, AV-DiT makes it easier to develop applications in fields like entertainment, education, and virtual reality where combining audio and visuals is essential.
Abstract
Recent Diffusion Transformers (DiTs) have shown impressive capabilities in generating high-quality single-modality content, including images, videos, and audio. However, it remains under-explored whether transformer-based diffusers can efficiently denoise Gaussian noise toward superb multimodal content creation. To bridge this gap, we introduce AV-DiT, a novel and efficient audio-visual diffusion transformer designed to generate high-quality, realistic videos with both visual and audio tracks. To minimize model complexity and computational costs, AV-DiT utilizes a shared DiT backbone pre-trained on image-only data, with only lightweight, newly inserted adapters being trainable. This shared backbone facilitates both audio and video generation. Specifically, the video branch incorporates a trainable temporal attention layer into a frozen pre-trained DiT block for temporal consistency. Additionally, a small number of trainable parameters adapt the image-based DiT block for audio generation. An extra shared DiT block, equipped with lightweight parameters, facilitates feature interaction between audio and visual modalities, ensuring alignment. Extensive experiments on the AIST++ and Landscape datasets demonstrate that AV-DiT achieves state-of-the-art performance in joint audio-visual generation with significantly fewer tunable parameters. Furthermore, our results highlight that a single shared image generative backbone with modality-specific adaptations is sufficient for constructing a joint audio-video generator. Our source code and pre-trained models will be released.
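To illustrate the parameter-sharing scheme described in the abstract, the following hedged sketch shows one joint denoising step: a single frozen, image-pretrained DiT block serves both modalities, small bottleneck adapters (an assumption; the paper only says lightweight trainable parameters) specialize it for the video and audio latents, and a plain cross-attention stands in for the paper's extra shared interaction block that aligns the two streams. Every class, shape, and module name here is illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn


class JointAVDenoiserSketch(nn.Module):
    """One joint audio-visual denoising step around a shared frozen DiT block.

    Only the two bottleneck adapters and the cross-modal attention are trained;
    the image-pretrained backbone is frozen. All names/shapes are assumptions.
    """

    def __init__(self, shared_dit_block: nn.Module, dim: int, bottleneck: int = 64):
        super().__init__()
        self.shared = shared_dit_block
        for p in self.shared.parameters():               # image-pretrained, frozen
            p.requires_grad_(False)
        # Lightweight modality-specific adapters (assumed to be bottleneck MLPs).
        self.video_adapter = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.GELU(), nn.Linear(bottleneck, dim))
        self.audio_adapter = nn.Sequential(
            nn.Linear(dim, bottleneck), nn.GELU(), nn.Linear(bottleneck, dim))
        # Cross-attention here stands in for the paper's shared interaction block.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, video_tok: torch.Tensor, audio_tok: torch.Tensor,
                cond: torch.Tensor):
        # video_tok: (batch, video_tokens, dim); audio_tok: (batch, audio_tokens, dim)
        # The shared frozen backbone denoises both streams; adapters specialize it.
        v = self.shared(video_tok, cond) + self.video_adapter(video_tok)
        a = self.shared(audio_tok, cond) + self.audio_adapter(audio_tok)
        # Cross-modal attention keeps the audio and video tracks aligned.
        v = v + self.cross_attn(v, a, a, need_weights=False)[0]
        a = a + self.cross_attn(a, v, v, need_weights=False)[0]
        return v, a
```

The point of the sketch is the ratio, not the exact modules: the frozen backbone carries almost all of the parameters, while the trainable adapters and the interaction step account for the small tunable fraction the paper reports.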