FantasyID: Face Knowledge Enhanced ID-Preserving Video Generation
Yunpeng Zhang, Qiang Wang, Fan Jiang, Yaqi Fan, Mu Xu, Yonggang Qi
2025-02-24
Summary
This paper introduces FantasyID, an AI system that generates videos of a specific person from text descriptions while keeping that person's face realistic and consistent throughout the video.
What's the problem?
Current AI systems that generate videos of people from text descriptions often struggle to keep the person's face consistent across frames while also producing natural facial expressions and movements. They tend to either keep the face too static or change it too much, making the result look unrealistic.
What's the solution?
The researchers created FantasyID, which combines 3D face modeling with careful training strategies to produce more realistic videos. They gave the AI knowledge of how faces are structured in 3D and showed it faces of the same person from different angles, so it learns varied expressions and head poses instead of simply copying one reference image into every frame. They also built an adaptive mechanism that carefully injects this face information into the video-generation process, balancing identity consistency against natural movement.
Why it matters?
This matters because it could lead to more realistic and personalized video content creation. It could be used in movies, video games, or virtual reality to create lifelike characters that look like specific people. This technology could change how we make visual content, making it easier to create custom videos without needing real actors for every scene.
Abstract
Tuning-free approaches adapting large-scale pre-trained video diffusion models for identity-preserving text-to-video generation (IPT2V) have gained popularity recently due to their efficacy and scalability. However, significant challenges remain in achieving satisfactory facial dynamics while keeping the identity unchanged. In this work, we present a novel tuning-free IPT2V framework by enhancing face knowledge of the pre-trained video model built on diffusion transformers (DiT), dubbed FantasyID. Essentially, 3D facial geometry prior is incorporated to ensure plausible facial structures during video synthesis. To prevent the model from learning copy-paste shortcuts that simply replicate the reference face across frames, a multi-view face augmentation strategy is devised to capture diverse 2D facial appearance features, hence increasing the dynamics over the facial expressions and head poses. Additionally, after blending the 2D and 3D features as guidance, instead of naively employing cross-attention to inject guidance cues into DiT layers, a learnable layer-aware adaptive mechanism is employed to selectively inject the fused features into each individual DiT layer, facilitating balanced modeling of identity preservation and motion dynamics. Experimental results validate our model's superiority over the current tuning-free IPT2V methods.
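The layer-aware adaptive injection described in the abstract can be sketched roughly as follows. This is an illustrative NumPy toy, not the authors' implementation: the function names, the scalar sigmoid gating form, and the plain cross-attention used here are all assumptions, chosen only to show the idea of each DiT layer learning how strongly to absorb the fused 2D/3D face guidance.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def layer_aware_inject(hidden_states, face_features, gate_logits):
    """Blend fused face guidance into each layer's hidden states.

    hidden_states: list of per-layer video token features, each of shape (T, D)
    face_features: fused 2D appearance + 3D geometry guidance, shape (F, D)
    gate_logits:   hypothetical learnable per-layer scalars; sigmoid(gate)
                   controls how much guidance that layer absorbs
    """
    guided = []
    for h, g in zip(hidden_states, gate_logits):
        # Simple cross-attention from video tokens to face guidance tokens
        scores = h @ face_features.T / np.sqrt(h.shape[-1])   # (T, F)
        scores -= scores.max(axis=-1, keepdims=True)          # numerical stability
        attn = np.exp(scores)
        attn /= attn.sum(axis=-1, keepdims=True)
        cue = attn @ face_features                            # (T, D) guidance cue
        # Layer-aware gate: each layer selectively injects the cue
        guided.append(h + sigmoid(g) * cue)
    return guided
```

A gate logit near a large negative value effectively switches injection off for that layer, while a large positive value injects the full cue; training such per-layer scalars is one simple way a model could trade identity preservation against motion dynamics layer by layer.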