DisPose: Disentangling Pose Guidance for Controllable Human Image Animation
Hongxiang Li, Yaowei Li, Yuhang Yang, Junjie Cao, Zhihong Zhu, Xuxin Cheng, Long Chen
2024-12-13

Summary
This paper introduces DisPose, a method for controllable human image animation: it generates a video from a single reference image, driven by the motion of another video, while keeping precise control over how the character moves.
What's the problem?
Animating a static image from a driving video is hard because sparse control signals like a skeleton pose carry limited information, so prior methods add dense conditions such as depth maps. This strict dense guidance backfires when the reference character's body shape differs significantly from the person in the driving video, producing animations that look unnatural or inconsistent.
What's the solution?
DisPose improves on this by disentangling the skeleton-pose guidance into two parts: motion field guidance and keypoint correspondence. It expands the sparse motion field defined by the skeleton into a dense, region-level motion field using the reference image, which guides the animation more effectively without requiring extra dense inputs like depth maps. It also samples features at the pose keypoints of the reference image and transfers them to the target pose, helping preserve the character's identity throughout the animation. These signals plug into existing animation models while keeping their parameters frozen, improving video quality and consistency. A minimal sketch of the motion-field densification step follows.
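To make the "sparse motion field → dense motion field" step concrete, here is a toy PyTorch sketch that densifies keypoint displacements by Gaussian-weighted interpolation. It is a stand-in only: DisPose performs this densification with a learned network conditioned on the reference image, and the function name, tensor layout, and `sigma` bandwidth here are illustrative assumptions.

```python
import torch

def sparse_to_dense_flow(kpts, flows, hw, sigma=0.05):
    """Densify a sparse motion field given at K keypoints into a dense
    (2, H, W) flow map via Gaussian-weighted interpolation.

    kpts:  (K, 2) keypoint positions in normalized [-1, 1] coordinates
    flows: (K, 2) per-keypoint displacement vectors
    hw:    (H, W) output resolution
    """
    H, W = hw
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
    )
    grid = torch.stack([xs, ys], dim=-1).view(-1, 2)            # (H*W, 2)
    # Squared distance from every pixel to every keypoint
    d2 = ((grid[:, None, :] - kpts[None, :, :]) ** 2).sum(-1)   # (H*W, K)
    w = torch.softmax(-d2 / (2 * sigma**2), dim=1)              # (H*W, K)
    dense = w @ flows                                           # (H*W, 2)
    return dense.view(H, W, 2).permute(2, 0, 1)                 # (2, H, W)

# Usage: two keypoints moving apart horizontally
kpts = torch.tensor([[-0.5, 0.0], [0.5, 0.0]])
flows = torch.tensor([[-0.1, 0.0], [0.1, 0.0]])
field = sparse_to_dense_flow(kpts, flows, (64, 64))  # (2, 64, 64)
```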
Why it matters?
This research is significant because it enables more realistic and controllable human animation in video, which is useful in entertainment, gaming, and virtual reality applications. By improving how characters are animated, DisPose can help create more engaging and lifelike experiences for viewers.
Abstract
Controllable human image animation aims to generate videos from reference images using driving videos. Due to the limited control signals provided by sparse guidance (e.g., skeleton pose), recent works have attempted to introduce additional dense conditions (e.g., depth map) to ensure motion alignment. However, such strict dense guidance impairs the quality of the generated video when the body shape of the reference character differs significantly from that of the driving video. In this paper, we present DisPose to mine more generalizable and effective control signals without additional dense input, which disentangles the sparse skeleton pose in human image animation into motion field guidance and keypoint correspondence. Specifically, we generate a dense motion field from a sparse motion field and the reference image, which provides region-level dense guidance while maintaining the generalization of the sparse pose control. We also extract diffusion features corresponding to pose keypoints from the reference image, and then these point features are transferred to the target pose to provide distinct identity information. To seamlessly integrate into existing models, we propose a plug-and-play hybrid ControlNet that improves the quality and consistency of generated videos while freezing the existing model parameters. Extensive qualitative and quantitative experiments demonstrate the superiority of DisPose compared to current methods. Code: https://github.com/lihxxx/DisPose.
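The keypoint-correspondence idea in the abstract, sampling diffusion features at reference-pose keypoints and placing them at the matching target-pose locations, can be sketched as below. This is a simplified illustration under assumed tensor shapes and is not the paper's implementation; in DisPose the sampled features come from a diffusion model's intermediate feature maps.

```python
import torch
import torch.nn.functional as F

def transfer_keypoint_features(ref_features, ref_kpts, tgt_kpts, out_hw):
    """Sample features at reference keypoints and scatter them onto the
    target-pose locations to form an identity-aware guidance map.

    ref_features: (C, H, W) feature map extracted from the reference image
    ref_kpts, tgt_kpts: (K, 2) keypoints as normalized (x, y) in [-1, 1]
    out_hw: (H_out, W_out) spatial size of the guidance map
    """
    C, _, _ = ref_features.shape
    # grid_sample expects (N, H_out, W_out, 2); treat K points as a 1xKx1 grid
    grid = ref_kpts.view(1, -1, 1, 2)
    sampled = F.grid_sample(ref_features.unsqueeze(0), grid,
                            align_corners=True)        # (1, C, K, 1)
    point_feats = sampled.squeeze(0).squeeze(-1)       # (C, K)

    # Paste each sampled feature vector at its target-pose pixel
    H_out, W_out = out_hw
    xs = ((tgt_kpts[:, 0] + 1) / 2 * (W_out - 1)).round().long().clamp(0, W_out - 1)
    ys = ((tgt_kpts[:, 1] + 1) / 2 * (H_out - 1)).round().long().clamp(0, H_out - 1)
    guidance = torch.zeros(C, H_out, W_out)
    guidance[:, ys, xs] = point_feats
    return guidance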
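Finally, the "plug-and-play" claim amounts to freezing the base model and training only a side branch whose outputs are injected as additive residuals, the standard ControlNet pattern. A minimal sketch, assuming a diffusers-style UNet interface (`down_block_additional_residuals`, `.sample`) and a hypothetical `control_branch` module; DisPose's actual hybrid ControlNet is more elaborate.

```python
import torch.nn as nn

class PlugAndPlayControl(nn.Module):
    """Train only the control branch; the existing denoising UNet stays
    frozen and merely consumes the branch's features as residuals."""

    def __init__(self, frozen_unet: nn.Module, control_branch: nn.Module):
        super().__init__()
        self.unet = frozen_unet
        self.unet.requires_grad_(False)   # existing model parameters frozen
        self.control = control_branch     # the only trainable component

    def forward(self, latents, timestep, text_emb, guidance):
        # Map the disentangled guidance (dense motion field + keypoint
        # feature maps) to per-block residual features (hypothetical API).
        residuals = self.control(guidance, timestep)
        # Inject residuals into the frozen UNet, diffusers-style.
        return self.unet(
            latents, timestep,
            encoder_hidden_states=text_emb,
            down_block_additional_residuals=residuals,
        ).sample
```

Because the base model's weights never change, the same trained branch can, in principle, be attached to different animation backbones that share this interface, which is the practical appeal of the plug-and-play design.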