Generative Refocusing: Flexible Defocus Control from a Single Image

Chun-Wei Tuan Mu, Jia-Bin Huang, Yu-Lun Liu

2025-12-19

Summary

This paper presents a new way to digitally change the focus of a photo *after* it's been taken, and even customize the blurry background effect, known as bokeh.

What's the problem?

Traditionally, getting a photo perfectly in focus requires careful adjustment while shooting. Changing the focus or the blur effect afterward is difficult, and existing software struggles with it: it often needs a perfectly sharp starting image, relies on unrealistic computer-generated images for training, or offers little control over *how* the blur looks.

What's the solution?

The researchers developed a two-part system called Generative Refocusing. First, a network called DeblurNet sharpens the input to recover a fully focused version of the image. Then, a second network called BokehNet adds back a realistic blurry background and, importantly, lets you control the style of that blur. The key to their success is a semi-supervised training method that combines computer-generated image pairs with *real* bokeh photos and the camera settings (EXIF metadata) used to take them, allowing the system to learn what real blur looks like.

Why it matters?

This work is important because it makes it much easier to fix focus errors in photos and creatively control the aesthetic of blurry backgrounds. It opens the door to editing photos in ways that were previously very difficult or impossible, and allows for things like changing the blur based on text prompts or creating unique blur shapes.

Abstract

Depth-of-field control is essential in photography, but getting the perfect focus often takes several tries or special equipment. Single-image refocusing is still difficult. It involves recovering sharp content and creating realistic bokeh. Current methods have significant drawbacks. They need all-in-focus inputs, depend on synthetic data from simulators, and have limited control over aperture. We introduce Generative Refocusing, a two-step process that uses DeblurNet to recover all-in-focus images from various inputs and BokehNet for creating controllable bokeh. Our main innovation is semi-supervised training. This method combines synthetic paired data with unpaired real bokeh images, using EXIF metadata to capture real optical characteristics beyond what simulators can provide. Our experiments show we achieve top performance in defocus deblurring, bokeh synthesis, and refocusing benchmarks. Additionally, our Generative Refocusing allows text-guided adjustments and custom aperture shapes.