Continuous Diffusion Model for Language Modeling
Jaehyeong Jo, Sung Ju Hwang
2025-02-19
Summary
This paper introduces a new way to build language models using continuous diffusion models. The idea is to teach a computer to understand and generate language by gradually adding and removing noise from words, but in a smooth, continuous way rather than the jumpy, step-by-step way of previous methods.
What's the problem?
Current diffusion models for language work with discrete data, which means they jump from one word to another without any in-between steps. This makes it hard for the models to refine their choices gradually, and they often lose important information during the process. Also, existing continuous models for language don't work as well as the discrete ones, and it's not clear how the two approaches are related.
What's the solution?
The researchers created a new continuous diffusion model for language that respects the geometry of how probability is spread across words. They established a connection between discrete and continuous approaches, which led to a new design for the diffusion process that generalizes earlier discrete models. They also developed a training method that doesn't need expensive simulations and can handle the high dimensionality of language. Their model works better than existing discrete diffusion models and comes close to the performance of autoregressive models, the most popular type of language model today.
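To give a feel for what "the geometry of how probability is spread across words" means, here is a minimal, hypothetical sketch (not the paper's actual implementation). It uses a well-known fact: under the Fisher-Rao metric, a categorical distribution p maps to the point sqrt(p) on the unit hypersphere, so moving smoothly between word choices becomes walking along a spherical geodesic. The toy vocabulary and the `geodesic` helper below are illustrative assumptions.

```python
import numpy as np

def to_sphere(p):
    """Map a point on the probability simplex to the unit hypersphere (sqrt map)."""
    return np.sqrt(p)

def geodesic(u, v, t):
    """Spherical geodesic (slerp) from unit vector u to unit vector v at time t in [0, 1]."""
    omega = np.arccos(np.clip(u @ v, -1.0, 1.0))
    if omega < 1e-8:
        return u
    return (np.sin((1 - t) * omega) * u + np.sin(t * omega) * v) / np.sin(omega)

# A one-hot token distribution and the uniform "noise" distribution over a toy vocab.
vocab = ["cat", "dog", "car"]
one_hot = np.array([1.0, 0.0, 0.0])       # the model is sure the word is "cat"
uniform = np.ones(3) / 3                  # maximum uncertainty

u, v = to_sphere(one_hot), to_sphere(uniform)

# Noising: walk continuously from the token toward uniform noise.
# Every intermediate point is still a valid distribution over words.
for t in [0.0, 0.5, 1.0]:
    p = geodesic(u, v, t) ** 2            # map back to the simplex
    print(f"t={t}: {np.round(p, 3)}")
```

The point of the sketch is the contrast with discrete diffusion: instead of jumping from "cat" directly to a corrupted token, every intermediate state is a valid, slightly fuzzier distribution over the vocabulary, which is what lets the model refine its choices gradually during generation.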
Why it matters?
This matters because it could lead to better AI systems for understanding and generating language. By using a continuous approach, the model can make more subtle improvements as it generates text, potentially leading to more natural and accurate language production. This could improve things like translation software, chatbots, and other AI applications that work with language. It also bridges the gap between different types of language models, which could help researchers develop even better models in the future.
Abstract
Diffusion models have emerged as a promising alternative to autoregressive models in modeling discrete categorical data. Yet diffusion models that directly work on discrete data space do not fully exploit the power of iterative refinement, as the signals are lost during the transition between discrete states. Existing continuous diffusion models for discrete data have limited performance compared to discrete approaches, and the unclear link between them restricts the development of diffusion models for discrete data. In this work, we propose a continuous diffusion model for language modeling that incorporates the geometry of the underlying categorical distribution. We establish a connection between the discrete diffusion and continuous flow on the statistical manifold, and building on the analogy, we introduce a simple design for the diffusion process that generalizes previous discrete diffusion models. We further propose a simulation-free training framework based on radial symmetry and a simple technique to address the high dimensionality of the manifold. Comprehensive experiments on language modeling benchmarks and other modalities show that our method outperforms existing discrete diffusion models and approaches the performance of autoregressive models. Codes available at https://github.com/harryjo97/RDLM.