From Denoising to Refining: A Corrective Framework for Vision-Language Diffusion Model
Yatai Ji, Teng Wang, Yuying Ge, Zhiheng Liu, Sidi Yang, Ying Shan, Ping Luo
2025-10-27
Summary
This paper introduces a new way to improve how AI models generate content from text prompts and images, focusing on making the step-by-step generation process more reliable and accurate.
What's the problem?
Current AI models that create things step-by-step often suffer from a 'snowball effect' of errors: if the model makes a small mistake early on, that mistake pollutes everything that comes after, leading to nonsensical or factually incorrect results. This happens because these models usually just 'fill in the blanks' and don't actively check their work as they go.
What's the solution?
The researchers developed a system called ReDiff that teaches the AI model to be its own editor. It works in two steps: first, the model learns to fix mistakes that are intentionally added to its work. Then, it learns to correct its *own* mistakes during the creation process, guided by examples of how an expert would fix similar errors. This 'revise-as-you-go' approach prevents small errors from becoming big problems.
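The two stages described above can be sketched as data-construction steps: stage one pairs a clean target with a synthetically corrupted copy, and stage two pairs the model's own draft with an expert's correction. This is a minimal illustrative sketch, not the authors' implementation; the function names (`corrupt`, `stage1_pairs`, `stage2_pairs`), the token-list representation, and the corruption rate are all assumptions for illustration.

```python
import random

def corrupt(tokens, vocab, p=0.3, rng=None):
    """Stage 1 (assumed form): inject synthetic errors into a clean
    target sequence by randomly replacing tokens with probability p."""
    rng = rng or random.Random(0)
    return [rng.choice(vocab) if rng.random() < p else t for t in tokens]

def stage1_pairs(dataset, vocab):
    """Build (noisy draft, clean target) pairs that teach the model a
    foundational revision skill: map a corrupted sequence back to the original."""
    return [(corrupt(seq, vocab), seq) for seq in dataset]

def stage2_pairs(model_draft_fn, expert_fix_fn, prompts):
    """Stage 2 (assumed form): online self-correction. The model produces
    its own flawed draft, an expert corrects it, and the (draft, corrected)
    pair becomes mistake-driven training data."""
    pairs = []
    for prompt in prompts:
        draft = model_draft_fn(prompt)            # model's own output, possibly flawed
        corrected = expert_fix_fn(prompt, draft)  # expert revision of that draft
        pairs.append((draft, corrected))
    return pairs
```

The key design point this sketch tries to convey is that stage two trains on the model's *actual* mistakes rather than synthetic ones, which is what closes the gap between training and inference and breaks the error cascade.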
Why it matters?
This research is important because it makes these AI models much more dependable and capable of producing high-quality content. By stopping the chain reaction of errors, ReDiff allows for faster and more accurate generation of images and text, opening up possibilities for more useful and creative applications of AI.
Abstract
Discrete diffusion models have emerged as a promising direction for vision-language tasks, offering bidirectional context modeling and theoretical parallelization. However, their practical application is severely hindered by a train-inference discrepancy, which leads to catastrophic error cascades: initial token errors during parallel decoding pollute the generation context, triggering a chain reaction of compounding errors and leading to syntactic errors and semantic hallucinations. To address this fundamental challenge, we reframe the generation process from passive denoising to active refining. We introduce ReDiff, a refining-enhanced diffusion framework that teaches the model to identify and correct its own errors. Our approach features a two-stage training process: first, we instill a foundational revision capability by training the model to revise synthetic errors; second, we implement a novel online self-correction loop where the model is explicitly trained to revise its own flawed drafts by learning from an expert's corrections. This mistake-driven learning endows the model with the crucial ability to revisit and refine its already generated output, effectively breaking the error cascade. Extensive experiments demonstrate that ReDiff significantly improves the coherence and factual accuracy of generated content, enabling stable and efficient parallel generation far superior to traditional denoising methods. Our codes and models are available at https://rediff-hku.github.io/.