Adaptive Blind All-in-One Image Restoration
David Serrano-Lozano, Luis Herranz, Shaolin Su, Javier Vazquez-Corral
2024-11-28

Summary
This paper introduces Adaptive Blind All-in-One Restoration (ABAIR), a new model that efficiently restores high-quality images from degraded inputs without needing to know the types of distortions beforehand.
What's the problem?
Current image restoration models often require knowing all possible types of image damage (like blurriness or noise) during training. This makes them less effective when they encounter new or unexpected types of damage, which is common in real-world situations. As a result, these models can struggle to produce good results when faced with unfamiliar distortions.
What's the solution?
The authors developed ABAIR, which can handle multiple types of image degradation and adapt to new ones without extensive retraining. They first trained a baseline model on a large dataset with various synthetic distortions, adding a segmentation head that estimates the type of damage at each pixel. They then trained a separate lightweight low-rank adapter for each restoration task, and a small degradation estimator that blends these adapters according to the specific issues present in the input image. This lets ABAIR combine specialized restoration behaviors on the fly, improving its performance across different tasks.
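The per-pixel damage estimation can be pictured as a segmentation head on top of the backbone: the network emits one score map per degradation type, and each pixel is assigned the type with the highest score. The sketch below is a minimal illustration of that idea, not the paper's architecture; the number of types, shapes, and names are assumptions.

```python
import numpy as np

# Toy stand-in for the per-pixel degradation head described above:
# the backbone produces a logits map with one channel per degradation
# type, and the head converts it into a per-pixel type map.
N_TYPES = 5      # e.g. noise, blur, rain, haze, low-light (hypothetical labels)
H, W = 4, 4      # tiny image for illustration

rng = np.random.default_rng(0)
logits = rng.normal(size=(N_TYPES, H, W))   # hypothetical backbone output

# Per-pixel degradation type: argmax over the type channel.
type_map = logits.argmax(axis=0)
print(type_map.shape)   # (4, 4), one type id per pixel
```

In training, such a head would be supervised with the known synthetic degradation applied to each region, which is what gives the backbone its awareness of distortion types.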
Why it matters?
This research is important because it enhances the ability of AI to restore images in a flexible and efficient way. By making it easier to deal with various types of distortions, ABAIR can help improve image quality in many applications, such as photography, medical imaging, and video production, ultimately leading to better visual experiences.
Abstract
Blind all-in-one image restoration models aim to recover a high-quality image from an input degraded with unknown distortions. However, these models require all the possible degradation types to be defined during the training stage and show limited generalization to unseen degradations, which limits their practical application in complex cases. In this paper, we propose a simple but effective adaptive blind all-in-one restoration (ABAIR) model, which can address multiple degradations, generalizes well to unseen degradations, and efficiently incorporates new degradations by training only a small fraction of its parameters. First, we train our baseline model on a large dataset of natural images with multiple synthetic degradations, augmented with a segmentation head to estimate per-pixel degradation types, resulting in a powerful backbone able to generalize to a wide range of degradations. Second, we adapt our baseline model to specific image restoration tasks using independent low-rank adapters. Third, we learn to adaptively combine these adapters for diverse input images via a flexible and lightweight degradation estimator. Our model is both powerful in handling specific distortions and flexible in adapting to complex tasks: it not only outperforms the state-of-the-art by a large margin on five- and three-task IR setups, but also shows improved generalization to unseen degradations as well as to composite distortions.
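The second and third steps above can be sketched as low-rank adapters blended by estimated degradation weights: each task contributes a small low-rank update to a frozen backbone weight, and a lightweight estimator produces mixing weights for the input at hand. The sketch below is a minimal numpy illustration under assumed toy sizes; the estimator here is a hypothetical stand-in for the learned module in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rank, n_tasks = 8, 2, 3          # toy sizes; the real model is far larger

W0 = rng.normal(size=(d, d))        # frozen backbone weight
# One low-rank adapter (B @ A) per restoration task, LoRA-style.
adapters = [(rng.normal(size=(d, rank)) * 0.1,   # B: d x rank
             rng.normal(size=(rank, d)) * 0.1)   # A: rank x d
            for _ in range(n_tasks)]

def estimator_weights(x):
    """Stand-in for the lightweight degradation estimator: softmax scores.
    A real estimator is learned from the degraded input; this is illustrative."""
    scores = x.mean() + np.arange(n_tasks, dtype=float)
    e = np.exp(scores - scores.max())
    return e / e.sum()

def adapted_forward(x):
    w = estimator_weights(x)                      # mixing weights, sum to 1
    delta = sum(wi * (B @ A) for wi, (B, A) in zip(w, adapters))
    return (W0 + delta) @ x                       # backbone + blended adapters

y = adapted_forward(rng.normal(size=d))           # restored feature vector
```

Because only the adapters and estimator are trained per task, adding a new degradation type means fitting one more small (B, A) pair rather than retraining the backbone, which is what makes the incorporation of new degradations efficient.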