Feynman-Kac Correctors in Diffusion: Annealing, Guidance, and Product of Experts
Marta Skreta, Tara Akhound-Sadegh, Viktor Ohanesian, Roberto Bondesan, Alán Aspuru-Guzik, Arnaud Doucet, Rob Brekelmans, Alexander Tong, Kirill Neklyudov
2025-03-11
Summary
This paper introduces Feynman-Kac Correctors (FKCs), a principled way to control how generative AI models produce images, molecules, or other data by combining multiple pre-trained models and adjusting their behavior during the sampling process.
What's the problem?
Current generative AI models (for images, molecules, and similar data) can't easily combine multiple pre-trained models or adjust their outputs at generation time without extra correction steps or a loss in sample quality.
What's the solution?
FKCs use the Feynman-Kac formula to blend models in a mathematically principled way, and a resampling technique called Sequential Monte Carlo (SMC) to refine the results, letting the method mix pre-trained models like ingredients in a recipe to produce better outputs.
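The core idea of weighting and resampling particles so they follow a product of two models can be illustrated with a toy sketch. This is not the paper's algorithm, just a minimal stand-in: the two "experts" are hypothetical 1-D Gaussian densities, one serves as the proposal, and a single importance-weighting plus resampling step (the basic SMC move) draws samples from their normalized product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical pre-trained "experts", here just unnormalized
# 1-D Gaussian densities (stand-ins for real score-based models).
def expert_a(x):            # proportional to N(-1, 1)
    return np.exp(-0.5 * (x + 1.0) ** 2)

def expert_b(x):            # proportional to N(+1, 1)
    return np.exp(-0.5 * (x - 1.0) ** 2)

def product_of_experts_smc(n_particles=20000):
    """Sample from the normalized product expert_a * expert_b using
    importance weighting followed by multinomial resampling."""
    # Proposal: draw particles from expert_a.
    x = rng.normal(-1.0, 1.0, size=n_particles)
    # Weight each particle by the other expert's density.
    w = expert_b(x)
    w /= w.sum()
    # Resample particles in proportion to their weights (SMC step).
    idx = rng.choice(n_particles, size=n_particles, p=w)
    return x[idx]

samples = product_of_experts_smc()
# The product of N(-1, 1) and N(+1, 1) is N(0, 1/2), so the
# resampled particles should have mean ~0 and variance ~0.5.
print(samples.mean(), samples.var())
```

In the paper, the weights come from carefully derived Feynman-Kac terms along a diffusion trajectory rather than a single density ratio; this sketch only shows why weighted resampling lets reused models be combined without retraining.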
Why does it matter?
This helps scientists and artists generate complex designs (like new drugs or artwork) faster and more accurately by reusing existing models instead of training new ones from scratch.
Abstract
While score-based generative models are the model of choice across diverse domains, there are limited tools available for controlling inference-time behavior in a principled manner, e.g. for composing multiple pretrained models. Existing classifier-free guidance methods use a simple heuristic to mix conditional and unconditional scores to approximately sample from conditional distributions. However, such methods do not approximate the intermediate distributions, necessitating additional 'corrector' steps. In this work, we provide an efficient and principled method for sampling from a sequence of annealed, geometric-averaged, or product distributions derived from pretrained score-based models. We derive a weighted simulation scheme which we call Feynman-Kac Correctors (FKCs) based on the celebrated Feynman-Kac formula by carefully accounting for terms in the appropriate partial differential equations (PDEs). To simulate these PDEs, we propose Sequential Monte Carlo (SMC) resampling algorithms that leverage inference-time scaling to improve sampling quality. We empirically demonstrate the utility of our methods by proposing amortized sampling via inference-time temperature annealing, improving multi-objective molecule generation using pretrained models, and improving classifier-free guidance for text-to-image generation. Our code is available at https://github.com/martaskrt/fkc-diffusion.
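The classifier-free guidance heuristic the abstract refers to is a linear mix of conditional and unconditional scores. The sketch below is a toy illustration, not the paper's corrector: the two score functions are hypothetical 1-D Gaussian scores, and `w` is the usual guidance weight.

```python
import numpy as np

def cfg_score(score_cond, score_uncond, x, w):
    """Classifier-free guidance heuristic:
    (1 + w) * s_cond(x) - w * s_uncond(x).
    This mixes scores but does not match the intermediate
    distributions, which is what FKCs correct for."""
    return (1.0 + w) * score_cond(x) - w * score_uncond(x)

# Toy stand-ins: the score of N(mu, 1) is -(x - mu).
s_cond = lambda x: -(x - 2.0)   # hypothetical conditional score
s_uncond = lambda x: -(x - 0.0) # hypothetical unconditional score

x = np.array([0.5])
print(cfg_score(s_cond, s_uncond, x, w=1.0))
```

Because this mixture is only a heuristic for the true conditional score, sampling with it drifts off the intended sequence of distributions; the paper's FKC weights account for the missing PDE terms so that SMC resampling can correct the trajectory.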