FramePainter: Endowing Interactive Image Editing with Video Diffusion Priors
Yabo Zhang, Xinpeng Zhou, Yihan Zeng, Hang Xu, Hui Li, Wangmeng Zuo
2025-01-15

Summary
This paper introduces FramePainter, a new AI tool that makes editing images easier and more intuitive. It borrows knowledge from video generation models to make image editing smarter and more data-efficient.
What's the problem?
Current image editing AI tools need a ton of training data and extra complicated parts to understand how things move and change in the real world. This makes them slow to develop and hard to use. They also struggle with making edits that look natural and consistent with the original image.
What's the solution?
The researchers created FramePainter, which treats image editing like making a very short, two-frame video. It starts from a pre-trained video AI (Stable Video Diffusion) and adds a lightweight way to read user edits such as drawings, clicks, or drags (sketched in code below). They also came up with a trick called 'matching attention' that helps the AI keep edits consistent with the original image, even for big changes.
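To make the "very short video" idea concrete, here is a minimal PyTorch sketch of how editing could be posed as two-frame video generation: the source image becomes the first frame, the edited result is denoised as the second frame, and a small encoder turns the sparse edit map into an extra conditioning signal. This is an illustrative sketch only; the names SparseControlEncoder and build_two_frame_latents are hypothetical stand-ins, not the authors' released code.

```python
# Hedged sketch, NOT the official FramePainter implementation.
import torch
import torch.nn as nn

class SparseControlEncoder(nn.Module):
    """Hypothetical lightweight encoder that maps a sparse edit map
    (e.g., a rasterized sketch or drag arrows) to a feature residual."""
    def __init__(self, in_ch=3, out_ch=320):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, out_ch, 3, padding=1),
        )

    def forward(self, edit_map):
        return self.net(edit_map)

def build_two_frame_latents(source_latent, noise):
    """Treat editing as a 2-frame video: frame 0 is the (clean) source latent,
    frame 1 is pure noise that the video model denoises into the edited image."""
    return torch.stack([source_latent, noise], dim=1)  # (B, T=2, C, H, W)

# Toy usage with random tensors standing in for VAE latents and an edit map
# (the edit map is kept at latent resolution here just for illustration).
B, C, H, W = 1, 4, 64, 64
source_latent = torch.randn(B, C, H, W)
noise = torch.randn(B, C, H, W)
edit_map = torch.randn(B, 3, H, W)

latents = build_two_frame_latents(source_latent, noise)  # (1, 2, 4, 64, 64)
control = SparseControlEncoder()(edit_map)                # features injected into the denoiser
print(latents.shape, control.shape)
```

In this framing, the heavy lifting of keeping the two frames physically plausible is left to the pre-trained video model, which is why only a small control encoder needs to be trained.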
Why it matters?
This matters because it could make photo editing software much smarter and easier to use. FramePainter can do impressive edits with less training than other AIs, which means it could be used in more places and on more devices. It's also really good at making edits that aren't common in real videos, like turning a clownfish into a shark shape. This could open up new creative possibilities for artists and designers, and make advanced image editing accessible to more people.
Abstract
Interactive image editing allows users to modify images through visual interaction operations such as drawing, clicking, and dragging. Existing methods construct such supervision signals from videos, as they capture how objects change with various physical interactions. However, these models are usually built upon text-to-image diffusion models, and thus necessitate (i) massive training samples and (ii) an additional reference encoder to learn real-world dynamics and visual consistency. In this paper, we reformulate this task as an image-to-video generation problem, so that it inherits powerful video diffusion priors to reduce training costs and ensure temporal consistency. Specifically, we introduce FramePainter as an efficient instantiation of this formulation. Initialized with Stable Video Diffusion, it only uses a lightweight sparse control encoder to inject editing signals. Considering the limitations of temporal attention in handling large motion between two frames, we further propose matching attention to enlarge the receptive field while encouraging dense correspondence between edited and source image tokens. We highlight the effectiveness and efficiency of FramePainter across various editing signals: it dominantly outperforms previous state-of-the-art methods with far less training data, achieving highly seamless and coherent editing of images, e.g., automatically adjusting the reflection of the cup. Moreover, FramePainter also exhibits exceptional generalization to scenarios not present in real-world videos, e.g., transforming the clownfish into a shark-like shape. Our code will be available at https://github.com/YBYBZhang/FramePainter.
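For readers who want a feel for the matching attention idea described above, here is a hedged PyTorch sketch of one plausible reading: instead of temporal attention that only mixes tokens at the same spatial position across frames, every token of the edited frame attends to all tokens of the source frame, which enlarges the receptive field and encourages dense correspondences. The class name MatchingAttention and all shapes are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch of cross-frame "matching attention"; all names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatchingAttention(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, edited_tokens, source_tokens):
        # edited_tokens: (B, N, dim); source_tokens: (B, M, dim) flattened spatial tokens
        B, N, D = edited_tokens.shape
        h = self.heads
        q = self.to_q(edited_tokens).view(B, N, h, D // h).transpose(1, 2)
        k = self.to_k(source_tokens).view(B, -1, h, D // h).transpose(1, 2)
        v = self.to_v(source_tokens).view(B, -1, h, D // h).transpose(1, 2)
        # Dense correspondence: each edited-frame token can match any source-frame token.
        out = F.scaled_dot_product_attention(q, k, v)
        out = out.transpose(1, 2).reshape(B, N, D)
        return self.to_out(out)

# Toy usage: a 32x32 latent grid flattened to 1024 tokens of width 320
tokens_src = torch.randn(1, 1024, 320)
tokens_edit = torch.randn(1, 1024, 320)
attn = MatchingAttention(dim=320)
print(attn(tokens_edit, tokens_src).shape)  # torch.Size([1, 1024, 320])
```

The design intuition, as the abstract describes it, is that full cross-frame attention lets an edited region borrow appearance from wherever the corresponding content sits in the source image, even when a large motion moves it far from its original position.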