Continuously Augmented Discrete Diffusion model for Categorical Generative Modeling
Huangjie Zheng, Shansan Gong, Ruixiang Zhang, Tianrong Chen, Jiatao Gu, Mingyuan Zhou, Navdeep Jaitly, Yizhe Zhang
2025-10-06
Summary
This paper introduces a new way to improve discrete diffusion models, specifically in how they handle masked (missing) information in the data they generate.
What's the problem?
Traditional discrete diffusion models struggle when parts of the data are 'masked' or hidden. They treat these missing parts as completely unknown, essentially creating a blank space where information should be. This leads to a loss of context and makes it harder to generate realistic and coherent outputs because the model doesn't have enough clues about what should go in those missing spots.
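The 'information void' can be seen in a toy sketch of the absorbing-mask forward process (the `MASK` id and the per-token corruption rule here are illustrative assumptions, not the paper's exact formulation):

```python
import random

MASK = -1  # hypothetical id for the absorbing [MASK] token

def mask_corrupt(tokens, t):
    """Absorbing-state forward process: each token is independently
    replaced by MASK with probability t (the noise level in [0, 1])."""
    return [MASK if random.random() < t else tok for tok in tokens]

# Two very different sequences corrupt toward the same all-MASK state,
# so the denoiser sees identical blanks regardless of what was there:
random.seed(0)
a = mask_corrupt([5, 9, 2, 7], t=1.0)
b = mask_corrupt([1, 1, 1, 1], t=1.0)
assert a == b == [MASK] * 4  # all token-level information is gone
```

Once a position collapses to `MASK`, nothing about the original token survives at that position between denoising steps, which is the loss of context described above.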
What's the solution?
The researchers developed a method called Continuously Augmented Discrete Diffusion, or CADD. Instead of treating masked areas as empty, CADD adds a continuous 'hint' alongside the discrete data. Think of it like this: the discrete data is like individual puzzle pieces, and the continuous part is a blurry image underneath that gives clues about what the missing pieces should look like. This 'hint' helps the model fill in the gaps more intelligently during the generation process, using the surrounding information to guide its decisions.
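The hint idea can be sketched in a toy form: a masked position keeps a noisy continuous vector derived from the clean token's embedding, and the denoiser scores vocabulary entries against that vector. Everything here (the embedding table, the noise schedule, the linear scorer) is an illustrative assumption, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
V, D = 8, 4  # toy vocabulary size and latent dimension

# Hypothetical embedding table linking discrete tokens to the latent space.
E = rng.normal(size=(V, D))

def continuous_hint(clean_token, t):
    """Noisy latent for a masked position: the clean token's embedding
    corrupted by Gaussian noise at level t, so the position stays
    informative instead of collapsing to a blank."""
    eps = rng.normal(size=D)
    return np.sqrt(1 - t) * E[clean_token] + np.sqrt(t) * eps

def denoise_masked(hint):
    """Toy discrete denoiser: softmax over similarity to the hint.
    A real CADD model would use a neural network over the full sequence."""
    scores = E @ hint
    probs = np.exp(scores - scores.max())
    return probs / probs.sum()

probs = denoise_masked(continuous_hint(clean_token=3, t=0.3))
```

At low noise the hint biases the distribution toward the original token; at high noise it fades toward uninformative, mirroring how the continuous channel carries graded, gradually corrupted information through the reverse process.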
Why it matters?
This matters because it improves the quality of outputs from these models, such as text, images, and computer code. By handling missing information more gracefully, CADD lets the models produce more diverse and accurate outputs, with a tunable balance between exploring different possibilities and sticking closely to the given context.
Abstract
Standard discrete diffusion models treat all unobserved states identically by mapping them to an absorbing [MASK] token. This creates an 'information void' where semantic information that could be inferred from unmasked tokens is lost between denoising steps. We introduce Continuously Augmented Discrete Diffusion (CADD), a framework that augments the discrete state space with a paired diffusion in a continuous latent space. This yields graded, gradually corrupted states in which masked tokens are represented by noisy yet informative latent vectors rather than collapsed 'information voids'. At each reverse step, CADD may leverage the continuous latent as a semantic hint to guide discrete denoising. The design is clean and compatible with existing discrete diffusion training. At sampling time, the strength and choice of estimator for the continuous latent vector enables a controlled trade-off between mode-coverage (generating diverse outputs) and mode-seeking (generating contextually precise outputs) behaviors. Empirically, we demonstrate CADD improves generative quality over mask-based diffusion across text generation, image synthesis, and code modeling, with consistent gains on both qualitative and quantitative metrics against strong discrete baselines.
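The abstract's 'choice of estimator' for the continuous latent can be illustrated with two simple options: the expected embedding under the denoiser's token distribution (mode-covering) versus the embedding of the single most likely token (mode-seeking). The embedding table and both estimators below are hedged toy assumptions, not the paper's actual estimators:

```python
import numpy as np

rng = np.random.default_rng(1)
V, D = 8, 4
E = rng.normal(size=(V, D))  # hypothetical token-embedding table

def latent_estimate(probs, mode_seeking):
    """Sketch of two latent estimators over a denoiser's token
    distribution: averaging spreads mass over candidates (diverse),
    while taking the top token commits to one reading (precise)."""
    if mode_seeking:
        return E[int(np.argmax(probs))]  # commit to the top candidate
    return probs @ E                     # average over all candidates

probs = np.array([0.5, 0.3, 0.1, 0.05, 0.02, 0.01, 0.01, 0.01])
soft = latent_estimate(probs, mode_seeking=False)
hard = latent_estimate(probs, mode_seeking=True)
assert np.allclose(hard, E[0]) and not np.allclose(soft, hard)
```

Interpolating between such estimators (or rescaling the latent's strength) is one plausible way the sampling-time trade-off between mode coverage and mode seeking described in the abstract could be exposed as a control knob.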