
MIDAS: Multimodal Interactive Digital-human Synthesis via Real-time Autoregressive Video Generation

Ming Chen, Liyuan Cui, Wenyuan Zhang, Haoxian Zhang, Yan Zhou, Xiaohan Li, Xiaoqiang Liu, Pengfei Wan

2025-08-28

Summary

This research focuses on creating realistic, interactive digital humans in video form: videos of people you can hold a conversation with or control in real time.

What's the problem?

Currently, making these interactive digital humans is really hard. Existing methods are slow, require a lot of computing power, and don't give you very precise control over what the digital human does or says. It's difficult to get them to respond quickly and naturally to different kinds of input like speech, movements, or text commands.

What's the solution?

The researchers built a new system around a large language model, the kind that powers chatbots, changing it only minimally. The model takes in different types of input (audio, pose, and text) and produces representations that guide a diffusion model, which then renders the video frames in a streaming, step-by-step fashion so the result stays coherent. To make this faster and more efficient, they also built a deep compression autoencoder that shrinks the video data by up to 64 times, cutting down the amount of processing needed at each step. They trained the system on a large dataset of about 20,000 hours of conversations.
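
To make this concrete, here is a minimal PyTorch sketch of the general idea: per-modality encoders project audio, pose, and text into a shared token space, a causal transformer reads them autoregressively, and a small "diffusion head" uses the resulting hidden states to denoise video latents. Every module name, size, and layer choice below is an illustrative assumption, not the paper's actual code.

```python
# Minimal sketch: multimodal conditions -> causal transformer -> diffusion head.
# All sizes and layer choices are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class MultimodalARBackbone(nn.Module):
    def __init__(self, d_model=512, n_layers=6, n_heads=8,
                 audio_dim=128, pose_dim=64, text_vocab=32000):
        super().__init__()
        # Per-modality encoders mapping raw conditions into shared tokens.
        self.audio_proj = nn.Linear(audio_dim, d_model)
        self.pose_proj = nn.Linear(pose_dim, d_model)
        self.text_embed = nn.Embedding(text_vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)

    def forward(self, audio, pose, text_ids):
        # Concatenate modality tokens along the sequence dimension and apply
        # a causal mask so each position only sees past context (streaming).
        tokens = torch.cat([self.audio_proj(audio),
                            self.pose_proj(pose),
                            self.text_embed(text_ids)], dim=1)
        causal = nn.Transformer.generate_square_subsequent_mask(
            tokens.size(1)).to(tokens.device)
        return self.transformer(tokens, mask=causal)


class DiffusionHead(nn.Module):
    """Predicts the noise added to a video latent, conditioned on the
    backbone's hidden state for the current step (heavily simplified)."""

    def __init__(self, latent_dim=64, d_model=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + d_model + 1, 512), nn.SiLU(),
            nn.Linear(512, latent_dim))

    def forward(self, noisy_latent, cond, t):
        # t is the diffusion timestep, appended as one extra scalar feature.
        t_feat = t.view(-1, 1).float()
        return self.net(torch.cat([noisy_latent, cond, t_feat], dim=-1))
```

In a setup like this, the backbone would run step by step over incoming audio, pose, and text, and the diffusion head would denoise the next video latent from each new hidden state.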

Why it matters?

This work is important because it brings us closer to having truly interactive digital humans that can be used in many applications, like virtual assistants, realistic video games, or even helping people practice social skills. The improvements in speed and control make these digital humans much more practical and useful than previous attempts.

Abstract

Recently, interactive digital human video generation has attracted widespread attention and achieved remarkable progress. However, building such a practical system that can interact with diverse input signals in real time remains challenging to existing methods, which often struggle with high latency, heavy computational cost, and limited controllability. In this work, we introduce an autoregressive video generation framework that enables interactive multimodal control and low-latency extrapolation in a streaming manner. With minimal modifications to a standard large language model (LLM), our framework accepts multimodal condition encodings including audio, pose, and text, and outputs spatially and semantically coherent representations to guide the denoising process of a diffusion head. To support this, we construct a large-scale dialogue dataset of approximately 20,000 hours from multiple sources, providing rich conversational scenarios for training. We further introduce a deep compression autoencoder with up to 64× reduction ratio, which effectively alleviates the long-horizon inference burden of the autoregressive model. Extensive experiments on duplex conversation, multilingual human synthesis, and interactive world model highlight the advantages of our approach in low latency, high efficiency, and fine-grained multimodal controllability.
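
To give a feel for what such a compression autoencoder might look like, here is a toy PyTorch sketch. It stacks three stride-2 convolutional stages, shrinking each frame by 8× per side, i.e. 64× fewer spatial positions for the autoregressive model to handle. The paper's actual architecture and the exact factorization behind its "up to 64×" reduction are not detailed here, so the layer choices and channel widths below are assumptions.

```python
# Toy deep-compression autoencoder: 3 stride-2 stages = 8x per side,
# 64x fewer spatial positions per frame. Layer choices are assumptions.
import torch
import torch.nn as nn


class DeepCompressionAE(nn.Module):
    def __init__(self, in_ch=3, latent_ch=16, width=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(width, width * 2, 4, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(width * 2, latent_ch, 4, stride=2, padding=1))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_ch, width * 2, 4, stride=2, padding=1),
            nn.SiLU(),
            nn.ConvTranspose2d(width * 2, width, 4, stride=2, padding=1),
            nn.SiLU(),
            nn.ConvTranspose2d(width, in_ch, 4, stride=2, padding=1))

    def forward(self, frames):
        z = self.encoder(frames)        # (B, latent_ch, H/8, W/8)
        return self.decoder(z), z


if __name__ == "__main__":
    x = torch.randn(1, 3, 256, 256)     # one RGB frame
    recon, z = DeepCompressionAE()(x)
    # 256x256 -> 32x32 latent grid: 64x fewer spatial positions to model.
    print(x.shape, z.shape, recon.shape)
```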