YuE: Scaling Open Foundation Models for Long-Form Music Generation
Ruibin Yuan, Hanfeng Lin, Shuyue Guo, Ge Zhang, Jiahao Pan, Yongyi Zang, Haohe Liu, Yiming Liang, Wenye Ma, Xingjian Du, Xinrun Du, Zhen Ye, Tianyu Zheng, Yinghao Ma, Minghao Liu, Zeyue Tian, Ziya Zhou, Liumeng Xue, Xingwei Qu, Yizhi Li, Shangda Wu, Tianhao Shen
2025-03-12
Summary
This paper introduces YuE, an open-source foundation model that turns lyrics into full-length songs (up to five minutes) with vocals and accompaniment, keeping the lyrics, vocals, and music aligned and coherent throughout.
What's the problem?
Existing AI music tools struggle to generate long songs whose vocals follow the lyrics: outputs often drift out of sync or lose musical structure, because mixed music signals are dense and complex and high-quality paired training data is scarce.
What's the solution?
YuE combines several techniques: it decouples vocals and instrumentals into separate token tracks, conditions on song structure (verses, choruses, and other sections) to keep lyrics and music aligned over long contexts, and uses a multitask, multiphase pre-training recipe on large-scale music data to handle diverse styles and languages.
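The track-decoupling idea can be pictured as interleaving per-frame codec tokens from the vocal and accompaniment stems into one sequence, so a single language model predicts both tracks instead of one dense mixture. The sketch below is a minimal illustration under that assumption; the token values and helper names are hypothetical, not the paper's actual interface.

```python
# Hypothetical sketch of track-decoupled token interleaving: vocal and
# accompaniment codec tokens alternate frame by frame in one sequence.
# Token values here are toy placeholders, not real codec outputs.

def interleave_tracks(vocal_tokens, accomp_tokens):
    """Zip per-frame vocal/accompaniment tokens into one flat sequence."""
    assert len(vocal_tokens) == len(accomp_tokens)
    seq = []
    for v, a in zip(vocal_tokens, accomp_tokens):
        seq.extend([v, a])  # vocal token first, then accompaniment token
    return seq

def split_tracks(seq):
    """Recover the two stems from an interleaved sequence."""
    return seq[0::2], seq[1::2]

vocal = [101, 102, 103]   # toy tokens for the vocal stem
accomp = [201, 202, 203]  # toy tokens for the accompaniment stem
mixed = interleave_tracks(vocal, accomp)
print(mixed)                                   # [101, 201, 102, 202, 103, 203]
print(split_tracks(mixed) == (vocal, accomp))  # True
```

Because the two stems stay separable in the token stream, a standard next-token predictor can model them jointly without ever seeing a summed mixture.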
Why does it matter?
This lets musicians and creators produce complete songs faster, supports multiple languages and styles, and provides an open foundation for music understanding and generation in film, games, and education.
Abstract
We tackle the task of long-form music generation--particularly the challenging lyrics-to-song problem--by introducing YuE, a family of open foundation models based on the LLaMA2 architecture. Specifically, YuE scales to trillions of tokens and generates up to five minutes of music while maintaining lyrical alignment, coherent musical structure, and engaging vocal melodies with appropriate accompaniment. It achieves this through (1) track-decoupled next-token prediction to overcome dense mixture signals, (2) structural progressive conditioning for long-context lyrical alignment, and (3) a multitask, multiphase pre-training recipe to converge and generalize. In addition, we redesign the in-context learning technique for music generation, enabling versatile style transfer (e.g., converting Japanese city pop into an English rap while preserving the original accompaniment) and bidirectional generation. Through extensive evaluation, we demonstrate that YuE matches or even surpasses some of the proprietary systems in musicality and vocal agility. In addition, fine-tuning YuE enables additional controls and enhanced support for tail languages. Furthermore, beyond generation, we show that YuE's learned representations can perform well on music understanding tasks, where the results of YuE match or exceed state-of-the-art methods on the MARBLE benchmark.

Keywords: lyrics2song, song generation, long-form, foundation model, music generation
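The abstract's "structural progressive conditioning" can be sketched as feeding the lyrics section by section, each tagged with its structural label, so every generation step only needs to align a short lyric segment. The snippet below is a hedged illustration of that idea; the section labels, lyrics, and prompt format are invented for this example and are not the model's actual interface.

```python
# Hedged sketch of structural progressive conditioning: prompts grow
# section by section, so lyric-to-audio alignment stays local.
# Labels, lyrics, and the prompt format are illustrative assumptions.

def progressive_prompts(sections):
    """Yield one conditioning prompt per song section; each prompt carries
    all previously generated sections plus the current label and lyrics."""
    history = []
    for label, lyrics in sections:
        line = f"{label} {lyrics}"
        yield "\n".join(history + [line])
        history.append(line)

sections = [
    ("[verse]", "Neon rain on the midnight street"),
    ("[chorus]", "We keep on running, hearts on repeat"),
]

for prompt in progressive_prompts(sections):
    print(prompt)
    print("---")
```

Each yielded prompt would condition the model for one section, so the lyric context it must track at any step stays short even for a five-minute song.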