Identity-Preserving Text-to-Video Generation by Frequency Decomposition
Shenghai Yuan, Jinfa Huang, Xianyi He, Yunyuan Ge, Yujun Shi, Liuhan Chen, Jiebo Luo, Li Yuan
2024-11-27

Summary
This paper introduces ConsisID, a tuning-free method for generating videos from text that keeps a person's identity consistent across frames by decomposing facial features in the frequency domain and using them to control a diffusion transformer.
What's the problem?
When generating videos from text descriptions, a character's identity often drifts or is lost across frames. This inconsistency makes the videos look unrealistic and confusing, and it remains a significant open challenge for existing video generation models.
What's the solution?
The authors propose ConsisID, a model that generates videos while keeping a character's identity intact. It uses frequency decomposition to separate facial information into low-frequency components (the overall shape of the face) and high-frequency components (fine details). Two extractors handle these signals: a global facial extractor captures coarse facial features, which are integrated into the shallow layers of the network, while a local facial extractor captures detailed facial characteristics, which are injected into the transformer blocks. By combining these features, ConsisID produces high-quality videos that preserve the characters' identity throughout; a minimal sketch of this two-branch design follows below.
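The sketch below is a hypothetical PyTorch illustration of the two-branch idea, assuming simple module names (`GlobalFacialExtractor`, `LocalFacialExtractor`), dimensions, and a fusion scheme of our own choosing; it is not the authors' released implementation. The low-frequency branch produces a single coarse identity vector added to shallow-layer hidden states, and the high-frequency branch produces detail tokens intended for cross-attention inside the transformer blocks.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the two-branch identity control described above.
# Names, dimensions, and the fusion scheme are illustrative assumptions,
# not the actual ConsisID implementation.

class GlobalFacialExtractor(nn.Module):
    """Encodes a reference-face embedding and key-point embedding into a
    coarse (low-frequency) identity feature."""
    def __init__(self, face_dim=256, kpt_dim=256, latent_dim=1024):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(face_dim + kpt_dim, latent_dim),
            nn.GELU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, face_embed, kpt_embed):
        # face_embed: (B, face_dim), kpt_embed: (B, kpt_dim)
        return self.proj(torch.cat([face_embed, kpt_embed], dim=-1))  # (B, latent_dim)


class LocalFacialExtractor(nn.Module):
    """Distills patch-level face features into a small set of high-frequency
    identity tokens for cross-attention injection into transformer blocks."""
    def __init__(self, patch_dim=512, latent_dim=1024, num_tokens=16, num_heads=8):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(num_tokens, latent_dim))
        self.proj = nn.Linear(patch_dim, latent_dim)
        self.attn = nn.MultiheadAttention(latent_dim, num_heads, batch_first=True)

    def forward(self, patch_feats):
        # patch_feats: (B, N, patch_dim) fine-grained features from a face crop
        kv = self.proj(patch_feats)
        q = self.tokens.unsqueeze(0).expand(kv.size(0), -1, -1)
        out, _ = self.attn(q, kv, kv)
        return out  # (B, num_tokens, latent_dim)


# Usage: the coarse feature conditions shallow layers additively, while the
# detail tokens would be consumed by cross-attention inside the DiT blocks.
B = 2
global_feat = GlobalFacialExtractor()(torch.randn(B, 256), torch.randn(B, 256))
local_tokens = LocalFacialExtractor()(torch.randn(B, 64, 512))
hidden = torch.randn(B, 128, 1024)                 # shallow-layer hidden states
hidden = hidden + global_feat.unsqueeze(1)         # broadcast coarse ID signal
```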
Why it matters?
This research is important because it improves the way AI can create videos from text, making them more realistic and coherent. By ensuring that characters maintain their identity, ConsisID can enhance applications in entertainment, education, and virtual reality, providing users with better and more engaging content.
Abstract
Identity-preserving text-to-video (IPT2V) generation aims to create high-fidelity videos with consistent human identity. It is an important task in video generation but remains an open problem for generative models. This paper pushes the technical frontier of IPT2V in two directions that have not been resolved in the literature: (1) A tuning-free pipeline without tedious case-by-case finetuning, and (2) A frequency-aware heuristic identity-preserving DiT-based control scheme. We propose ConsisID, a tuning-free DiT-based controllable IPT2V model to keep human identity consistent in the generated video. Inspired by prior findings in frequency analysis of diffusion transformers, it employs identity-control signals in the frequency domain, where facial features can be decomposed into low-frequency global features and high-frequency intrinsic features. First, from a low-frequency perspective, we introduce a global facial extractor, which encodes reference images and facial key points into a latent space, generating features enriched with low-frequency information. These features are then integrated into shallow layers of the network to alleviate training challenges associated with DiT. Second, from a high-frequency perspective, we design a local facial extractor to capture high-frequency details and inject them into transformer blocks, enhancing the model's ability to preserve fine-grained features. We propose a hierarchical training strategy to leverage frequency information for identity preservation, transforming a vanilla pre-trained video generation model into an IPT2V model. Extensive experiments demonstrate that our frequency-aware heuristic scheme provides an optimal control solution for DiT-based models. Thanks to this scheme, our ConsisID generates high-quality, identity-preserving videos, making strides towards more effective IPT2V.
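As a concrete illustration of the frequency-domain view in the abstract, the sketch below splits a face image into a low-frequency component (global structure, obtained with a Gaussian low-pass filter) and a high-frequency residual (fine detail). This is only one common way to realize such a decomposition and is an assumption for illustration, not code from the ConsisID paper or repository.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(size: int = 9, sigma: float = 2.0) -> torch.Tensor:
    # 2-D Gaussian kernel built from the outer product of a 1-D Gaussian.
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords**2 / (2 * sigma**2))
    g = g / g.sum()
    return torch.outer(g, g)

def frequency_split(img: torch.Tensor, size: int = 9, sigma: float = 2.0):
    """Split (B, C, H, W) images into low-frequency (blurred) and
    high-frequency (residual detail) components."""
    k = gaussian_kernel(size, sigma).to(img)
    k = k.repeat(img.size(1), 1, 1, 1)                # one depthwise kernel per channel
    low = F.conv2d(img, k, padding=size // 2, groups=img.size(1))
    high = img - low                                  # what the blur removed: edges, texture
    return low, high

# Example: decompose a dummy 224x224 face crop.
face = torch.rand(1, 3, 224, 224)
low_freq, high_freq = frequency_split(face)
print(low_freq.shape, high_freq.shape)  # torch.Size([1, 3, 224, 224]) twice
```

In ConsisID, the low-frequency signal (reference image plus facial key points) conditions the shallow layers of the network, while the high-frequency details extracted by the local facial extractor are injected into the transformer blocks.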