
BlurDM: A Blur Diffusion Model for Image Deblurring

Jin-Ting He, Fu-Jen Tsai, Yan-Tsung Peng, Min-Hung Chen, Chia-Wen Lin, Yen-Yu Lin

2025-12-04


Summary

This paper introduces a new approach to removing blur from images, focusing specifically on motion blur caused by camera or subject movement during exposure.

What's the problem?

Current methods for deblurring images with diffusion models don't fully account for *how* blur actually forms. They treat it like just another type of noise, instead of recognizing that it arises from continuous exposure to light while the camera or subject is moving. This limits how well they can restore a sharp image.

What's the solution?

The researchers created a 'Blur Diffusion Model' (BlurDM) that directly incorporates the physics of motion blur into the deblurring process. Imagine starting with a sharp image, then gradually adding both noise *and* blur to it. BlurDM learns to reverse this process, simultaneously removing the noise and undoing the blur. To make this efficient, the process runs in a compressed 'latent space', which makes the calculations faster and lets BlurDM act as a flexible prior for existing deblurring networks. Essentially, it's a smarter way to reconstruct what the original, sharp image looked like.
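To make the "gradually add both noise and blur" idea concrete, here is a minimal sketch of a dual-diffusion forward step. Everything here is illustrative, not the paper's actual formulation: the box blur stands in for real motion blur, and both schedules (`alpha_bar` for noise, `blur_w` for blur) are toy choices. The point is only the structure: as the timestep grows, the sample drifts from the sharp image toward its blurred version while Gaussian noise is mixed in.

```python
import numpy as np

def box_blur(img, k=5):
    # Simple horizontal box blur as a stand-in for motion blur
    # (hypothetical; the paper models blur via continuous exposure).
    pad = k // 2
    padded = np.pad(img, ((0, 0), (pad, pad)), mode="edge")
    return np.stack([padded[:, i:i + img.shape[1]] for i in range(k)]).mean(axis=0)

def dual_diffusion_forward(x0, t, T, rng):
    # Sketch of one forward sample x_t: the clean signal is pulled toward
    # its blurred version (blur schedule) while Gaussian noise is injected
    # (noise schedule). Both schedules below are toy examples.
    alpha_bar = np.cos(0.5 * np.pi * t / T) ** 2   # noise level: 1 -> 0
    blur_w = t / T                                  # blur weight: 0 -> 1
    mean = (1 - blur_w) * x0 + blur_w * box_blur(x0)
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar) * mean + np.sqrt(1 - alpha_bar) * noise

rng = np.random.default_rng(0)
x0 = rng.random((8, 8))          # toy "sharp image"
x_mid = dual_diffusion_forward(x0, t=500, T=1000, rng=rng)   # noisy + half-blurred
x_T = dual_diffusion_forward(x0, t=1000, T=1000, rng=rng)    # ~pure Gaussian noise
```

The reverse process described in the summary would learn to invert this chain step by step, so that starting from `x_T` (pure noise, conditioned on the blurred observation) it recovers the sharp `x0`, denoising and deblurring at the same time.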

Why it matters?

This new method significantly improves the performance of existing deblurring techniques. By understanding and modeling the blur itself, rather than just treating it as noise, BlurDM consistently produces sharper and clearer images across several standard tests, meaning better quality photos and videos for everyone.

Abstract

Diffusion models show promise for dynamic scene deblurring; however, existing studies often fail to leverage the intrinsic nature of the blurring process within diffusion models, limiting their full potential. To address this, we present a Blur Diffusion Model (BlurDM), which seamlessly integrates the blur formation process into diffusion for image deblurring. Observing that motion blur stems from continuous exposure, BlurDM implicitly models the blur formation process through a dual-diffusion forward scheme, diffusing both noise and blur onto a sharp image. During the reverse generation process, we derive a dual denoising and deblurring formulation, enabling BlurDM to recover the sharp image by simultaneously denoising and deblurring, given pure Gaussian noise conditioned on the blurred image as input. Additionally, to efficiently integrate BlurDM into deblurring networks, we perform BlurDM in the latent space, forming a flexible prior generation network for deblurring. Extensive experiments demonstrate that BlurDM significantly and consistently enhances existing deblurring methods on four benchmark datasets. The source code is available at https://github.com/Jin-Ting-He/BlurDM.