Avatar Forcing: Real-Time Interactive Head Avatar Generation for Natural Conversation
Taekyung Ki, Sangwon Jang, Jaehyeong Jo, Jaehong Yoon, Sung Ju Hwang
2026-01-05
Summary
This paper focuses on making talking head avatars – those digital faces you see in videos or virtual meetings – more realistic and responsive during conversations.
What's the problem?
Current talking head avatars often feel one-sided and don't truly react to the person they're 'talking' with. They struggle to respond in real time and lack natural emotional expressions, making interactions feel unnatural. The main challenges are generating reactions quickly enough for live conversation and teaching the avatar to respond expressively without needing large amounts of labeled example data showing what a good reaction looks like.
What's the solution?
The researchers developed a new system called 'Avatar Forcing'. It uses a technique called 'diffusion forcing' so the avatar can process what the user is saying and doing – including their voice and head movements – almost instantly, frame by frame. It also uses a clever trick: it creates 'bad' example reactions by dropping the user's input, then trains itself to prefer the properly reactive ones, all without anyone having to manually label what a good reaction looks like. Essentially, it learns by comparing its reactions against examples of what *not* to do.
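For a more concrete picture of the label-free preference trick, here is a minimal sketch in PyTorch of a DPO-style loss where the 'losing' sample is generated with the user conditions dropped and the 'winning' sample with them kept. The function and variable names are illustrative assumptions, not the authors' code; the inputs are per-sample log-likelihoods you would compute from the generative model and a frozen reference copy.

```python
import torch
import torch.nn.functional as F

def label_free_dpo_loss(logp_win, logp_lose, logp_win_ref, logp_lose_ref, beta=0.1):
    """DPO-style loss over a 'winning' sample (generated with the user's audio/motion
    conditions) and a synthetic 'losing' sample (same model, user conditions dropped).
    Inputs are per-sample log-likelihoods under the trainable model and a frozen
    reference model; all names here are hypothetical, for illustration only."""
    margin = beta * ((logp_win - logp_win_ref) - (logp_lose - logp_lose_ref))
    # Maximizing log sigmoid(margin) pushes the model to prefer the reactive sample.
    return -F.logsigmoid(margin).mean()

# Toy usage with random log-likelihoods for a batch of 4 clips.
logp_win, logp_lose = torch.randn(4), torch.randn(4)
logp_win_ref, logp_lose_ref = torch.randn(4), torch.randn(4)
loss = label_free_dpo_loss(logp_win, logp_lose, logp_win_ref, logp_lose_ref)
```

Because the condition-dropped generations tend to be flat and unreactive, they serve as free negative examples, which is why no human preference labels are needed.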
Why does it matter?
This work is important because it brings us closer to having truly interactive and engaging virtual avatars. Faster response times and more natural expressions will make virtual communication feel more personal and effective, which is useful for things like video conferencing, creating virtual assistants, and even entertainment.
Abstract
Talking head generation creates lifelike avatars from static portraits for virtual communication and content creation. However, current models do not yet convey the feeling of truly interactive communication, often generating one-way responses that lack emotional engagement. We identify two key challenges toward truly interactive avatars: generating motion in real-time under causal constraints and learning expressive, vibrant reactions without additional labeled data. To address these challenges, we propose Avatar Forcing, a new framework for interactive head avatar generation that models real-time user-avatar interactions through diffusion forcing. This design allows the avatar to process real-time multimodal inputs, including the user's audio and motion, with low latency for instant reactions to both verbal and non-verbal cues such as speech, nods, and laughter. Furthermore, we introduce a direct preference optimization method that leverages synthetic losing samples constructed by dropping user conditions, enabling label-free learning of expressive interaction. Experimental results demonstrate that our framework enables real-time interaction with low latency (approximately 500 ms), achieving a 6.8x speedup over the baseline, and produces reactive and expressive avatar motion that is preferred over the baseline in more than 80% of comparisons.
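To illustrate the streaming idea behind the diffusion-forcing design described above, the sketch below generates avatar motion one frame at a time: each incoming user feature vector triggers a few denoising steps for the current frame only, conditioned causally on what was already generated. The tiny network, the single-vector history summary, and all names are assumptions made for a runnable toy example, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TinyCausalDenoiser(nn.Module):
    """Stand-in denoiser: refines a noisy motion frame given a summary of past
    frames, the current user features, and the noise level. Toy module only."""
    def __init__(self, motion_dim=64, user_dim=32, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(motion_dim * 2 + user_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, noisy, context, user_feat, noise_level):
        x = torch.cat([noisy, context, user_feat, noise_level], dim=-1)
        return self.net(x)

@torch.no_grad()
def stream_avatar_motion(denoiser, user_feats, motion_dim=64, steps=4):
    """Causal, frame-by-frame generation: only the newest frame is denoised,
    so latency is a few network calls per incoming user observation."""
    history = torch.zeros(motion_dim)          # causal context (summary of past frames)
    frames = []
    for user_feat in user_feats:               # user audio/motion features arrive one step at a time
        frame = torch.randn(motion_dim)        # start the new frame from pure noise
        for s in range(steps, 0, -1):          # a few refinement passes (stand-in for diffusion updates)
            level = torch.tensor([s / steps])
            frame = denoiser(frame, history, user_feat, level)
        frames.append(frame)
        history = torch.stack(frames).mean(0)  # update the causal context
    return torch.stack(frames)

# Toy usage: 10 incoming user feature vectors (e.g., audio + head-motion embeddings).
denoiser = TinyCausalDenoiser()
user_stream = torch.randn(10, 32)
motion = stream_avatar_motion(denoiser, user_stream)
print(motion.shape)  # torch.Size([10, 64])
```

The point of the sketch is the control flow, not the network: because each frame depends only on past frames and the latest user input, generation can keep pace with a live conversation instead of waiting for a full clip.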