SongGen: A Single Stage Auto-regressive Transformer for Text-to-Song Generation

Zihan Liu, Shuangrui Ding, Zhixiong Zhang, Xiaoyi Dong, Pan Zhang, Yuhang Zang, Yuhang Cao, Dahua Lin, Jiaqi Wang

2025-02-20

Summary

This paper introduces SongGen, a new AI system that can create entire songs, including both vocals and accompaniment, from text descriptions alone. It's like having a virtual musician who composes and performs songs based on your written ideas.

What's the problem?

Creating songs from text is hard for computers because music is complex and there isn't much training data to learn from. Existing methods often generate songs in multiple stages, which makes both training and inference slow and complicated.

What's the solution?

The researchers created SongGen, an AI model that generates an entire song in a single stage. It can control different aspects of the song, such as lyrics, instruments, genre, mood, and timbre, and it can even clone a specific voice from a three-second reference clip. SongGen supports two output modes: mixed mode, which generates vocals and accompaniment together as one signal, and dual-track mode, which generates them separately for more flexibility. The researchers also built an automated pipeline to prepare training data with quality control, ensuring high-quality results.
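To make the two output modes concrete, here is a minimal, purely illustrative sketch (not the paper's actual token pattern) of how a single auto-regressive model could emit either one mixed token stream or two parallel tracks. The token IDs and the simple alternating layout are assumptions for illustration; SongGen explores several such patterns.

```python
# Illustrative sketch of SongGen's two output modes using made-up token IDs.
# Mixed mode: one stream of codec tokens for the already-mixed song audio.
# Dual-track mode: vocals and accompaniment are separate streams; one simple
# way a single decoder can still predict both is to interleave them.

def mixed_mode(mixture_tokens):
    # Single stream: tokens of the mixed (vocals + accompaniment) audio.
    return list(mixture_tokens)

def dual_track_interleaved(vocal_tokens, acc_tokens):
    # Alternate vocal and accompaniment tokens: v0, a0, v1, a1, ...
    assert len(vocal_tokens) == len(acc_tokens)
    out = []
    for v, a in zip(vocal_tokens, acc_tokens):
        out.extend([v, a])
    return out

def split_dual_track(interleaved):
    # Invert the interleaving after generation to recover the two tracks,
    # which can then be mixed or edited independently downstream.
    return interleaved[0::2], interleaved[1::2]
```

Dual-track mode trades a longer sequence for the ability to post-process vocals and accompaniment independently, which is the "greater flexibility in downstream applications" the abstract refers to.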

Why it matters?

This matters because it could revolutionize how we create music. Anyone could potentially turn their ideas into songs without needing musical skills. It could help musicians experiment with new ideas quickly, or even create personalized music for games, videos, or other media. By making the system open-source, the researchers are inviting others to build upon and improve this technology, which could lead to even more exciting developments in AI-generated music.

Abstract

Text-to-song generation, the task of creating vocals and accompaniment from textual inputs, poses significant challenges due to domain complexity and data scarcity. Existing approaches often employ multi-stage generation procedures, resulting in cumbersome training and inference pipelines. In this paper, we propose SongGen, a fully open-source, single-stage auto-regressive transformer designed for controllable song generation. The proposed model facilitates fine-grained control over diverse musical attributes, including lyrics and textual descriptions of instrumentation, genre, mood, and timbre, while also offering an optional three-second reference clip for voice cloning. Within a unified auto-regressive framework, SongGen supports two output modes: mixed mode, which generates a mixture of vocals and accompaniment directly, and dual-track mode, which synthesizes them separately for greater flexibility in downstream applications. We explore diverse token pattern strategies for each mode, leading to notable improvements and valuable insights. Furthermore, we design an automated data preprocessing pipeline with effective quality control. To foster community engagement and future research, we will release our model weights, training code, annotated data, and preprocessing pipeline. The generated samples are showcased on our project page at https://liuzh-19.github.io/SongGen/, and the code will be available at https://github.com/LiuZH-19/SongGen.
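The abstract mentions exploring "token pattern strategies" for arranging codec tokens in an auto-regressive model. One well-known strategy in audio codec language models (popularized by MusicGen, and not necessarily SongGen's exact choice) is the delay pattern: with K codebooks per audio frame, codebook k is shifted by k steps so the model emits one token per codebook per step while preserving the dependency of later codebooks on earlier ones. A generic sketch, with `PAD` as an assumed placeholder for positions that hold no token:

```python
# Generic sketch of the "delay" token pattern used in some codec language
# models (e.g. MusicGen). Not claimed to be SongGen's exact pattern.
# frames: T audio frames, each holding K codebook tokens.
# Output: a (T + K - 1) x K grid where codebook k is delayed by k steps,
# so each row can be predicted in a single auto-regressive step.

PAD = -1  # assumed padding token for positions not yet filled

def delay_pattern(frames):
    T, K = len(frames), len(frames[0])
    grid = [[PAD] * K for _ in range(T + K - 1)]
    for t in range(T):
        for k in range(K):
            grid[t + k][k] = frames[t][k]
    return grid
```

For example, three frames with two codebooks each, `[[1, 2], [3, 4], [5, 6]]`, unroll into four steps, with codebook 1 lagging codebook 0 by one step. Choosing among such patterns changes sequence length and dependency structure, which is why the paper reports that pattern choice leads to notable quality differences.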