Remasking Discrete Diffusion Models with Inference-Time Scaling

Guanghan Wang, Yair Schiff, Subham Sekhar Sahoo, Volodymyr Kuleshov

2025-03-06

Summary

This paper introduces a new method called ReMDM (remasking diffusion model) that improves how AI generates text and images using a technique called discrete diffusion.

What's the problem?

Current discrete diffusion models can't fix mistakes once they generate a word or part of an image. This rules out iterative refinement, the ability to revisit and improve earlier output, which is a big part of what makes other generative AI models successful.

What's the solution?

The researchers created ReMDM, which allows discrete diffusion models to go back and correct mistakes they've made. It works by 'remasking' (re-hiding) parts of the generated content so that later sampling steps can refine them. The method can be applied to existing pretrained models without retraining them.
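To make the idea concrete, here is a minimal sketch of a remasking sampling loop. Everything in it is illustrative: `toy_denoiser` stands in for a pretrained masked diffusion model, and the fixed `remask_prob` is a placeholder for the paper's principled remasking backward process, not the actual ReMDM schedule.

```python
import random

MASK = "<mask>"

def toy_denoiser(tokens, vocab, rng):
    # Stand-in for a pretrained masked diffusion model: fills each
    # masked position with a token (here, sampled uniformly at random).
    return [rng.choice(vocab) if t == MASK else t for t in tokens]

def remasking_sample(length, vocab, steps, remask_prob, seed=0):
    """Illustrative remasking sampler (hypothetical, not the paper's exact method).

    Starts from a fully masked sequence. Each step unmasks tokens via the
    denoiser, then remasks a fraction of already-generated tokens so that
    later steps get a chance to revise them.
    """
    rng = random.Random(seed)
    tokens = [MASK] * length
    for step in range(steps):
        tokens = toy_denoiser(tokens, vocab, rng)
        if step < steps - 1:  # last step leaves everything unmasked
            tokens = [MASK if rng.random() < remask_prob else t
                      for t in tokens]
    return tokens

sample = remasking_sample(length=8, vocab=["a", "b", "c"],
                          steps=10, remask_prob=0.3)
```

Running more `steps` gives the sampler more opportunities to remask and revise tokens, which is the inference-time scaling knob the paper highlights: standard masked diffusion would skip the remasking line entirely and freeze each token the moment it is generated.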

Why it matters?

This matters because it makes AI text and image generation more flexible and higher quality. By letting the model correct its own mistakes, it can produce results closer to what humans would create, especially when given more time (more sampling steps) to work. This could lead to better AI writing assistants, more realistic AI-generated images, and improvements in scientific applications like designing new molecules.

Abstract

Part of the success of diffusion models stems from their ability to perform iterative refinement, i.e., repeatedly correcting outputs during generation. However, modern masked discrete diffusion lacks this capability: when a token is generated, it cannot be updated again, even when it introduces an error. Here, we address this limitation by introducing the remasking diffusion model (ReMDM) sampler, a method that can be applied to pretrained masked diffusion models in a principled way and that is derived from a discrete diffusion model with a custom remasking backward process. Most interestingly, ReMDM endows discrete diffusion with a form of inference-time compute scaling. By increasing the number of sampling steps, ReMDM generates natural language outputs that approach the quality of autoregressive models, whereas when the computation budget is limited, ReMDM better maintains quality. ReMDM also improves sample quality of masked diffusion models for discretized images, and in scientific domains such as molecule design, ReMDM facilitates diffusion guidance and pushes the Pareto frontier of controllability relative to classical masking and uniform noise diffusion. We provide the code along with a blog post on the project page: https://remdm.github.io.