Loopy: Taming Audio-Driven Portrait Avatar with Long-Term Motion Dependency
Jianwen Jiang, Chao Liang, Jiaqi Yang, Gaojie Lin, Tianyun Zhong, Yanbo Zheng
2024-09-05

Summary
This paper introduces Loopy, a model that generates realistic portrait avatar videos driven solely by audio, producing smooth and natural movement over time.
What's the problem?
Audio alone gives only weak control over how a character moves, so existing audio-driven video methods often produce unnatural or jerky motion. Many of them add auxiliary spatial signals, such as preset motion templates, to stabilize the movement, which limits the flexibility and realism of the generated videos.
What's the solution?
Loopy is an end-to-end model that uses only audio to guide video generation. It adds temporal and audio-conditioning modules that let the model learn long-term motion patterns from data, making the avatar's movements more natural and more closely tied to the audio input. Because it removes the need for extra spatial signals that constrain movement, Loopy allows greater freedom and expressiveness in the generated video.
Why it matters?
This research is important because it enhances how we create animated characters in videos, making them more lifelike and responsive to audio cues. This can be useful in various applications, such as video games, animated films, and virtual reality experiences, where natural character movement is crucial for viewer engagement.
Abstract
With the introduction of diffusion-based video generation techniques, audio-conditioned human video generation has recently achieved significant breakthroughs in both the naturalness of motion and the synthesis of portrait details. Due to the limited control of audio signals in driving human motion, existing methods often add auxiliary spatial signals to stabilize movements, which may compromise the naturalness and freedom of motion. In this paper, we propose an end-to-end audio-only conditioned video diffusion model named Loopy. Specifically, we designed an inter- and intra-clip temporal module and an audio-to-latents module, enabling the model to leverage long-term motion information from the data to learn natural motion patterns and improving audio-portrait movement correlation. This method removes the need for manually specified spatial motion templates used in existing methods to constrain motion during inference. Extensive experiments show that Loopy outperforms recent audio-driven portrait diffusion models, delivering more lifelike and high-quality results across various scenarios.
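The abstract names two components but gives no code. Below is a minimal, hypothetical PyTorch sketch of how an audio-to-latents module and an inter-/intra-clip temporal attention layer could be wired together; all class names, dimensions, and shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): illustrative only.
import torch
import torch.nn as nn

class AudioToLatents(nn.Module):
    """Maps per-frame audio features to latent tokens that condition the video diffusion model.
    Dimensions are assumptions for illustration."""
    def __init__(self, audio_dim=768, latent_dim=320, num_tokens=4):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(audio_dim, latent_dim),
            nn.SiLU(),
            nn.Linear(latent_dim, latent_dim * num_tokens),
        )
        self.num_tokens = num_tokens
        self.latent_dim = latent_dim

    def forward(self, audio_feats):            # (batch, frames, audio_dim)
        b, f, _ = audio_feats.shape
        tokens = self.proj(audio_feats)         # (batch, frames, latent_dim * num_tokens)
        return tokens.view(b, f * self.num_tokens, self.latent_dim)

class InterIntraClipTemporal(nn.Module):
    """Self-attention over the current clip's frame tokens plus 'memory' tokens from
    preceding clips, giving the model motion context beyond a single clip."""
    def __init__(self, dim=320, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, clip_tokens, memory_tokens):
        # clip_tokens:   (batch, cur_frames,  dim) - frames being denoised
        # memory_tokens: (batch, past_frames, dim) - features from earlier clips
        context = torch.cat([memory_tokens, clip_tokens], dim=1)
        out, _ = self.attn(self.norm(clip_tokens), context, context)
        return clip_tokens + out

# Toy shapes, showing how the pieces connect.
audio = torch.randn(1, 12, 768)                 # 12 frames of audio features
cond = AudioToLatents()(audio)                  # audio-derived latent tokens
cur = torch.randn(1, 12, 320)                   # current-clip frame tokens
past = torch.randn(1, 24, 320)                  # tokens from two preceding clips
fused = InterIntraClipTemporal()(cur, past)
```

The point this sketch tries to capture is that the temporal layer attends over tokens from preceding clips as well as the current one, which is what would let the model pick up motion dependencies longer than a single clip, while the audio-to-latents path supplies the only driving signal.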