EzAudio: Enhancing Text-to-Audio Generation with Efficient Diffusion Transformer
Jiarui Hai, Yong Xu, Hao Zhang, Chenxing Li, Helin Wang, Mounya Elhilali, Dong Yu
2024-09-18

Summary
This paper introduces EzAudio, a new model designed to generate high-quality audio from text prompts using an efficient diffusion transformer.
What's the problem?
Previous models for converting text to audio faced challenges such as low audio quality, high computational costs, and complex data preparation processes. These issues made it difficult to create realistic audio quickly and efficiently.
What's the solution?
EzAudio addresses these problems by operating in the latent space of a 1D waveform VAE rather than on 2D spectrograms, which yields better audio quality without a separate neural vocoder or extra post-processing steps. The model uses a diffusion transformer architecture optimized for audio latent representations, improving convergence speed, training stability, and memory usage. It also adopts a data-efficient training strategy that combines unlabeled audio, captions produced by audio-language models, and human-labeled data, allowing it to learn effectively even with limited annotated resources. Finally, it introduces a classifier-free guidance (CFG) rescaling method that keeps the generated audio well aligned with the prompt while preserving quality at larger CFG scores, so there is no need to hunt for an optimal CFG value to balance that trade-off.
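To make the CFG rescaling idea more concrete, here is a minimal sketch of how a rescaled classifier-free guidance step is commonly implemented: the guided noise prediction is rescaled so its standard deviation matches that of the conditional prediction, which counters the variance inflation caused by large guidance scales. This is an illustrative assumption based on the widely used rescaling trick, not EzAudio's exact formulation; the function and parameter names (cfg_rescale, rescale_weight, etc.) are placeholders.

```python
import numpy as np

def cfg_rescale(eps_cond, eps_uncond, guidance_scale=5.0, rescale_weight=0.7):
    """Classifier-free guidance with std rescaling (illustrative sketch only).

    eps_cond / eps_uncond: model noise predictions with and without the text
    prompt, e.g. arrays of shape (latent_channels, latent_length).
    """
    # Standard CFG: extrapolate from the unconditional toward the conditional prediction.
    eps_cfg = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    # Large guidance scales inflate the prediction's variance, which degrades audio quality.
    # Rescale so the guided prediction's std matches the conditional prediction's std.
    std_cond = eps_cond.std(axis=-1, keepdims=True)
    std_cfg = eps_cfg.std(axis=-1, keepdims=True)
    eps_rescaled = eps_cfg * (std_cond / (std_cfg + 1e-8))

    # Blend the rescaled and raw guided predictions to avoid over-correcting.
    return rescale_weight * eps_rescaled + (1.0 - rescale_weight) * eps_cfg

# Toy usage with random "noise predictions" on a 1D audio latent.
rng = np.random.default_rng(0)
eps_c = rng.standard_normal((8, 256))   # conditional prediction
eps_u = rng.standard_normal((8, 256))   # unconditional prediction
eps = cfg_rescale(eps_c, eps_u, guidance_scale=5.0)
print(eps.shape)  # (8, 256)
```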
Why it matters?
This research is significant because it represents a major advancement in text-to-audio generation technology. By improving the efficiency and quality of audio generation, EzAudio can be used in applications such as creating sound effects for videos, enhancing virtual reality experiences, and building assistive technologies that turn text descriptions into sound.
Abstract
Latent diffusion models have shown promising results in text-to-audio (T2A) generation tasks, yet previous models have encountered difficulties in generation quality, computational cost, diffusion sampling, and data preparation. In this paper, we introduce EzAudio, a transformer-based T2A diffusion model, to handle these challenges. Our approach includes several key innovations: (1) We build the T2A model on the latent space of a 1D waveform Variational Autoencoder (VAE), avoiding the complexities of handling 2D spectrogram representations and using an additional neural vocoder. (2) We design an optimized diffusion transformer architecture specifically tailored for audio latent representations and diffusion modeling, which enhances convergence speed, training stability, and memory usage, making the training process easier and more efficient. (3) To tackle data scarcity, we adopt a data-efficient training strategy that leverages unlabeled data for learning acoustic dependencies, audio caption data annotated by audio-language models for text-to-audio alignment learning, and human-labeled data for fine-tuning. (4) We introduce a classifier-free guidance (CFG) rescaling method that simplifies EzAudio by achieving strong prompt alignment while preserving great audio quality when using larger CFG scores, eliminating the need to struggle with finding the optimal CFG score to balance this trade-off. EzAudio surpasses existing open-source models in both objective metrics and subjective evaluations, delivering realistic listening experiences while maintaining a streamlined model structure, low training costs, and an easy-to-follow training pipeline. Code, data, and pre-trained models are released at: https://haidog-yaqub.github.io/EzAudio-Page/.
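To illustrate point (1) of the abstract, the sketch below outlines what inference with a text-to-audio latent diffusion model over a 1D waveform-VAE latent generally looks like: encode the prompt, iteratively denoise a 1D latent with classifier-free guidance, then decode the latent directly to a waveform without a vocoder. The component names (text_encoder, dit, vae_decoder), the latent shape, and the simplified update rule are all assumptions for illustration, not EzAudio's released API or its actual sampler.

```python
import numpy as np

def generate_audio(prompt, text_encoder, dit, vae_decoder,
                   num_steps=50, latent_shape=(8, 256), guidance_scale=5.0, seed=0):
    """Hypothetical T2A sampling loop over a 1D waveform-VAE latent.

    text_encoder(prompt) -> conditioning embedding
    dit(latent, t, cond) -> predicted noise for the current latent
    vae_decoder(latent)  -> waveform samples
    All three are placeholders for the actual trained components.
    """
    rng = np.random.default_rng(seed)
    cond = text_encoder(prompt)
    null_cond = text_encoder("")             # unconditional branch for CFG
    latent = rng.standard_normal(latent_shape)

    # Highly simplified denoising loop: a real sampler uses a proper noise schedule.
    for step in reversed(range(num_steps)):
        t = step / num_steps
        eps_c = dit(latent, t, cond)
        eps_u = dit(latent, t, null_cond)
        eps = eps_u + guidance_scale * (eps_c - eps_u)   # classifier-free guidance
        latent = latent - (1.0 / num_steps) * eps        # crude update, for illustration only

    return vae_decoder(latent)                # 1D latent -> waveform, no vocoder needed

# Dummy stand-ins so the sketch runs end to end.
text_encoder = lambda s: np.full(16, float(len(s)))
dit = lambda z, t, c: 0.1 * z
vae_decoder = lambda z: z.reshape(-1)

waveform = generate_audio("dog barking in the rain", text_encoder, dit, vae_decoder)
print(waveform.shape)
```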