Towards Natural Image Matting in the Wild via Real-Scenario Prior

Ruihao Xia, Yu Liang, Peng-Tao Jiang, Hao Zhang, Qianru Sun, Yang Tang, Bo Li, Pan Zhou

2024-10-16

Summary

This paper introduces COCO-Matting, a new dataset of real-world images for matting, and SEMat, a model trained on it; together they improve how we separate objects from their backgrounds in complex real-world images.

What's the problem?

Many existing models for image matting are trained on synthetic data, which doesn't transfer well to real-life situations. They struggle with complex scenes where objects overlap or are partially hidden, leading to poor results in practical applications.

What's the solution?

To solve this problem, the authors built a new dataset called COCO-Matting, which contains 38,251 human instance-level alpha mattes (detailed labels for separating people from their backgrounds) drawn from complex, real-world COCO images. They also developed a new model called SEMat, which redesigns both the network architecture and the training objectives of SAM-based matting: a feature-aligned transformer captures fine edge and transparency details, a matte-aligned decoder turns coarse masks into high-precision mattes, and regularization and trimap losses preserve the knowledge of the pre-trained model while teaching it trimap-level semantics. This lets the model understand both the overall scene and fine details, producing more accurate results.
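
To make the mask-to-matte idea more concrete, here is a minimal, illustrative sketch of one ingredient: turning a binary segmentation mask into a trimap (confident foreground, confident background, and an unknown band around the boundary). This is not the paper's actual pipeline, which uses a matting model to produce real alpha labels; the function name and the erosion/dilation width are assumptions made only for illustration.

    # Illustrative sketch: naive mask -> trimap conversion. NOT the paper's
    # mask-to-matte pipeline; names and widths are hypothetical.
    import numpy as np
    from scipy import ndimage

    def mask_to_trimap(mask: np.ndarray, unknown_width: int = 10) -> np.ndarray:
        """Convert a binary segmentation mask into a trimap with values
        0 (background), 128 (unknown boundary band), and 255 (foreground)."""
        mask = mask.astype(bool)
        fg = ndimage.binary_erosion(mask, iterations=unknown_width)    # confident foreground
        bg = ~ndimage.binary_dilation(mask, iterations=unknown_width)  # confident background
        trimap = np.full(mask.shape, 128, dtype=np.uint8)              # everything else: unknown
        trimap[fg] = 255
        trimap[bg] = 0
        return trimap

    if __name__ == "__main__":
        toy_mask = np.zeros((64, 64), dtype=np.uint8)
        toy_mask[16:48, 16:48] = 1
        print(np.unique(mask_to_trimap(toy_mask)))  # [  0 128 255]

A trimap like this is what matting models typically consume or predict alongside the alpha matte; the unknown band is where fine transparency must be estimated.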

Why it matters?

This research is important because it provides tools and datasets that can help improve image editing and computer vision applications. By creating a more effective way to separate objects from backgrounds in images, COCO-Matting and SEMat can enhance technologies used in photography, video production, and augmented reality.

Abstract

Recent approaches attempt to adapt powerful interactive segmentation models, such as SAM, to interactive matting and fine-tune the models based on synthetic matting datasets. However, models trained on synthetic data fail to generalize to complex and occluded scenes. We address this challenge by proposing a new matting dataset based on the COCO dataset, namely COCO-Matting. Specifically, the construction of our COCO-Matting includes accessory fusion and mask-to-matte, which select complex real-world images from COCO and convert semantic segmentation masks to matting labels. The built COCO-Matting comprises an extensive collection of 38,251 human instance-level alpha mattes in complex natural scenarios. Furthermore, existing SAM-based matting methods extract intermediate features and masks from a frozen SAM and only train a lightweight matting decoder with end-to-end matting losses, which do not fully exploit the potential of the pre-trained SAM. Thus, we propose SEMat, which revamps the network architecture and training objectives. For the network architecture, the proposed feature-aligned transformer learns to extract fine-grained edge and transparency features. The proposed matte-aligned decoder aims to segment matting-specific objects and convert coarse masks into high-precision mattes. For the training objectives, the proposed regularization and trimap losses aim to retain the prior from the pre-trained model and push the matting logits extracted from the mask decoder to contain trimap-based semantic information. Extensive experiments across seven diverse datasets demonstrate the superior performance of our method, proving its efficacy in interactive natural image matting. We open-source our code, models, and dataset at https://github.com/XiaRho/SEMat.
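
To ground the description of the training objectives, the following is a minimal sketch of how an end-to-end matting loss could be combined with a regularization term toward a frozen pre-trained model's prediction and a trimap cross-entropy on three-class logits, written in PyTorch. The exact loss formulations and weights in SEMat differ; the function name, the choice of L1 terms, and the weights below are assumptions made only for illustration.

    # Minimal sketch of a combined objective in the spirit of the abstract:
    # matting loss + regularization toward the frozen pre-trained prediction
    # + trimap cross-entropy. Weights and exact terms are assumptions.
    import torch
    import torch.nn.functional as F

    def matting_objective(pred_alpha,     # (B, 1, H, W) predicted alpha in [0, 1]
                          gt_alpha,       # (B, 1, H, W) ground-truth alpha matte
                          frozen_alpha,   # (B, 1, H, W) prediction of the frozen pre-trained model
                          trimap_logits,  # (B, 3, H, W) logits for background / unknown / foreground
                          gt_trimap,      # (B, H, W) class indices in {0, 1, 2}
                          w_reg=0.1, w_trimap=0.5):
        l_alpha = F.l1_loss(pred_alpha, gt_alpha)              # end-to-end matting loss
        l_reg = F.l1_loss(pred_alpha, frozen_alpha.detach())   # retain the pre-trained prior
        l_trimap = F.cross_entropy(trimap_logits, gt_trimap)   # trimap-based semantics
        return l_alpha + w_reg * l_reg + w_trimap * l_trimap

    # Usage with random tensors:
    B, H, W = 2, 64, 64
    loss = matting_objective(torch.rand(B, 1, H, W), torch.rand(B, 1, H, W),
                             torch.rand(B, 1, H, W), torch.randn(B, 3, H, W),
                             torch.randint(0, 3, (B, H, W)))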