TaDiCodec: Text-aware Diffusion Speech Tokenizer for Speech Language Modeling
Yuancheng Wang, Dekun Chen, Xueyao Zhang, Junan Zhang, Jiaqi Li, Zhizheng Wu
2025-08-26
Summary
This paper introduces a new way to convert speech into a format that's easier for computers to understand and work with, called TaDiCodec. It's a tool designed to be a key part of building better speech-based AI systems.
What's the problem?
Current methods for converting speech into a digital representation have several drawbacks. They often require complicated setups with many layers of processing or need to run at very high speeds to capture all the details. Also, they frequently rely on other AI models that have already been trained for different tasks, and the training process itself is often complex and takes multiple steps.
What's the solution?
The researchers developed TaDiCodec, which uses a technique called a diffusion autoencoder. This allows it to compress speech into a very small amount of data – as little as 0.0875 kbps – while still maintaining high quality. Importantly, it doesn't need any pre-trained models or a complicated training process; it learns everything end-to-end. It also uses text information to help improve the quality of the reconstructed speech.
Why it matters?
This work is important because it simplifies the process of converting speech into a usable format for AI. By reducing the complexity and resource requirements, TaDiCodec makes it easier to build more efficient and effective speech recognition and speech synthesis systems. It also shows promise for realistic text-to-speech technology and narrows the quality gap between reconstructed speech and speech generated from scratch.
Abstract
Speech tokenizers serve as foundational components for speech language models, yet current designs exhibit several limitations, including: 1) dependence on multi-layer residual vector quantization structures or high frame rates, 2) reliance on auxiliary pre-trained models for semantic distillation, and 3) requirements for complex two-stage training processes. In this work, we introduce the Text-aware Diffusion Transformer Speech Codec (TaDiCodec), a novel approach designed to overcome these challenges. TaDiCodec employs end-to-end optimization for quantization and reconstruction through a diffusion autoencoder, while integrating text guidance into the diffusion decoder to enhance reconstruction quality and achieve optimal compression. TaDiCodec achieves an extremely low frame rate of 6.25 Hz and a corresponding bitrate of 0.0875 kbps with a single-layer codebook for 24 kHz speech, while maintaining superior performance on critical speech generation evaluation metrics such as Word Error Rate (WER), speaker similarity (SIM), and speech quality (UTMOS). Notably, TaDiCodec employs a single-stage, end-to-end training paradigm, obviating the need for auxiliary pre-trained models. We also validate the compatibility of TaDiCodec in language model based zero-shot text-to-speech with both autoregressive modeling and masked generative modeling, demonstrating its effectiveness and efficiency for speech language modeling, as well as a notably small reconstruction-generation gap. We release code and model checkpoints at https://github.com/HeCheng0625/Diffusion-Speech-Tokenizer. Audio samples are available at https://tadicodec.github.io/.
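The bitrate figures in the abstract follow directly from the frame rate: a quick arithmetic check shows that 0.0875 kbps at 6.25 Hz works out to 14 bits per token. Note that the implied codebook size below is inferred from these numbers, not stated explicitly in the abstract.

```python
# Sanity-check the bitrate arithmetic from the abstract.
# Stated: frame rate = 6.25 Hz, bitrate = 0.0875 kbps, single-layer codebook.
frame_rate_hz = 6.25
bitrate_bps = 0.0875 * 1000  # 0.0875 kbps -> 87.5 bits/s

# Bits carried by each discrete token
bits_per_frame = bitrate_bps / frame_rate_hz
print(bits_per_frame)  # 14.0

# A 14-bit token would imply a codebook of 2**14 entries
# (an inference from the stated numbers, not a claim from the paper).
codebook_size = 2 ** int(bits_per_frame)
print(codebook_size)  # 16384
```

For comparison, a tokenizer running at 50 Hz with the same 14-bit codebook would need 0.7 kbps, which illustrates how much of the compression comes from the very low frame rate alone.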