VividFace: A Diffusion-Based Hybrid Framework for High-Fidelity Video Face Swapping

Hao Shao, Shulun Wang, Yang Zhou, Guanglu Song, Dailan He, Shuo Qin, Zhuofan Zong, Bingqi Ma, Yu Liu, Hongsheng Li

2024-12-17

Summary

This paper introduces VividFace, a diffusion-based method for swapping faces in videos that produces high-quality, realistic results while preserving the original person's expressions and movements.

What's the problem?

Existing face swapping methods mainly focus on still images and struggle with videos because they must keep the swapped face consistent from frame to frame. As a result, they often produce flickering or unnatural movements, especially when the face changes position or expression. They also tend to fail when the face is partially blocked or viewed from a very different angle.

What's the solution?

VividFace solves these problems with a hybrid training approach that combines abundant static image data with video data. It pairs a specially designed diffusion model with a video-aware autoencoder (VidFaceVAE) that processes both kinds of data, which helps keep the generated frames temporally consistent. The researchers also built a new dataset, the Attribute-Identity Disentanglement Triplet (AIDT) Dataset, which teaches the model to separate a face's identity from attributes such as pose, and they augment it with occlusions to improve robustness. Finally, by feeding 3D face reconstruction results into the network as conditioning, VividFace handles large pose variations and partially blocked faces effectively. A hypothetical sketch of the triplet idea follows.
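To make the triplet idea concrete, below is a minimal, hypothetical sketch of how such triplets could train separate identity and attribute encoders. The encoder architecture, loss terms, and weights are illustrative assumptions rather than VividFace's actual training code; the only property taken from the paper is the triplet structure, where two images share a pose and two share an identity.

```python
# Hypothetical sketch of identity/attribute disentanglement driven by an
# AIDT-style triplet. Encoders, loss terms, and weights are illustrative
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyEncoder(nn.Module):
    """Stand-in face encoder producing a single normalized embedding."""

    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)


def disentanglement_loss(id_enc, attr_enc, anchor, same_pose, same_identity):
    """Triplet structure: `anchor` shares pose with `same_pose` and
    identity with `same_identity`."""
    # Pull identity embeddings together across the same-identity pair...
    id_pull = 1 - F.cosine_similarity(id_enc(anchor), id_enc(same_identity)).mean()
    # ...and attribute (pose) embeddings together across the same-pose pair.
    attr_pull = 1 - F.cosine_similarity(attr_enc(anchor), attr_enc(same_pose)).mean()
    # Push identity embeddings apart for the different-identity pair.
    id_push = F.cosine_similarity(id_enc(anchor), id_enc(same_pose)).clamp(min=0).mean()
    return id_pull + attr_pull + 0.5 * id_push  # 0.5 is an arbitrary weight


if __name__ == "__main__":
    id_enc, attr_enc = TinyEncoder(), TinyEncoder()
    a, p, i = (torch.randn(4, 3, 64, 64) for _ in range(3))
    print(disentanglement_loss(id_enc, attr_enc, a, p, i).item())
```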

Why it matters?

This work is significant because it represents a major advancement in video face swapping technology. By improving how faces are swapped in videos, VividFace can enhance applications in entertainment, film, and virtual reality, making it easier to create realistic content without losing the original character's expressions and movements.

Abstract

Video face swapping is becoming increasingly popular across various applications, yet existing methods primarily focus on static images and struggle with video face swapping because of temporal consistency and complex scenarios. In this paper, we present the first diffusion-based framework specifically designed for video face swapping. Our approach introduces a novel image-video hybrid training framework that leverages both abundant static image data and temporal video sequences, addressing the inherent limitations of video-only training. The framework incorporates a specially designed diffusion model coupled with a VidFaceVAE that effectively processes both types of data to better maintain temporal coherence of the generated videos. To further disentangle identity and pose features, we construct the Attribute-Identity Disentanglement Triplet (AIDT) Dataset, where each triplet has three face images, with two images sharing the same pose and two sharing the same identity. Enhanced with a comprehensive occlusion augmentation, this dataset also improves robustness against occlusions. Additionally, we integrate 3D reconstruction techniques as input conditioning to our network for handling large pose variations. Extensive experiments demonstrate that our framework achieves superior performance in identity preservation, temporal consistency, and visual quality compared to existing methods, while requiring fewer inference steps. Our approach effectively mitigates key challenges in video face swapping, including temporal flickering, identity preservation, and robustness to occlusions and pose variations.
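One way to read the image-video hybrid training described above is that a still image can be treated as a one-frame clip, so a single spatio-temporal backbone can consume both kinds of data. The snippet below is a minimal sketch of that batching step under this assumption; the tensor layout, frame-repetition strategy, and function names are illustrative and not taken from VividFace's code.

```python
# Minimal sketch of image-video hybrid batching: still images are promoted
# to one-frame clips so a single (B, C, T, H, W) tensor can feed a
# spatio-temporal model. Layout and names are assumptions for illustration.
import torch


def as_clip(x: torch.Tensor) -> torch.Tensor:
    """Promote an image batch (B, C, H, W) to a clip batch (B, C, 1, H, W);
    pass clip batches (B, C, T, H, W) through unchanged."""
    if x.dim() == 4:
        return x.unsqueeze(2)  # insert a singleton time axis
    if x.dim() == 5:
        return x
    raise ValueError(f"expected 4D image or 5D clip batch, got {x.dim()}D")


def mixed_batch(images: torch.Tensor, clips: torch.Tensor) -> torch.Tensor:
    """Concatenate image and video samples along the batch axis, padding
    images to the clip length by repeating the single frame (one simple choice)."""
    images = as_clip(images)                   # (B1, C, 1, H, W)
    clips = as_clip(clips)                     # (B2, C, T, H, W)
    t = clips.shape[2]
    images = images.expand(-1, -1, t, -1, -1)  # repeat the frame T times
    return torch.cat([images, clips], dim=0)   # (B1 + B2, C, T, H, W)


if __name__ == "__main__":
    imgs = torch.randn(2, 3, 64, 64)      # two still images
    vids = torch.randn(2, 3, 8, 64, 64)   # two 8-frame clips
    print(mixed_batch(imgs, vids).shape)  # torch.Size([4, 3, 8, 64, 64])
```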