Character Mixing for Video Generation
Tingting Liao, Chongjian Ge, Guangyi Liu, Hao Li, Yi Zhou
2025-10-07
Summary
This paper explores how to create videos where characters from completely different shows or styles—like realistic people and cartoons—can interact with each other in a believable way, using artificial intelligence.
What's the problem?
The main challenge is that when you try to combine characters that have never met and exist in different visual styles, things often look weird. Characters might lose what makes them unique, or a realistic character might suddenly look cartoonish, and vice versa. It's hard to make the interaction feel natural and keep everyone looking like themselves.
What's the solution?
The researchers developed a system with two main parts. First, 'Cross-Character Embedding' helps the AI learn what each character *is* – their identity, appearance, and typical behavior – from different sources such as images and videos. Second, 'Cross-Character Augmentation' creates synthetic training examples in which these characters coexist and interact across mixed styles, so the AI learns to handle situations that never appear in the original footage. Together, these let the AI generate videos where characters interact naturally without losing their original look and feel.
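The paper does not ship code, but the augmentation idea is easy to picture: segment a character out of one source, composite it into frames from another, then caption the result as if the characters shared a scene. Below is a minimal sketch under assumptions of ours; `CharacterCutout`, `composite_coexistence_frame`, and the file names are hypothetical illustrations, not the authors' implementation.

```python
from dataclasses import dataclass
from PIL import Image

@dataclass
class CharacterCutout:
    """A segmented character with an alpha mask (hypothetical structure)."""
    name: str
    rgba: Image.Image  # RGBA image: character pixels plus transparency mask

def composite_coexistence_frame(background: Image.Image,
                                cutout: CharacterCutout,
                                position: tuple[int, int],
                                scale: float = 1.0) -> Image.Image:
    """Paste a character from one source onto a frame from another,
    yielding a synthetic 'co-existence' training frame."""
    w, h = cutout.rgba.size
    sprite = cutout.rgba.resize((max(1, int(w * scale)), max(1, int(h * scale))))
    frame = background.convert("RGBA")       # work in RGBA so the mask applies
    frame.alpha_composite(sprite, dest=position)
    return frame.convert("RGB")

# Usage: drop a live-action cutout into a cartoon frame, caption both names.
bg = Image.open("tom_and_jerry_frame.png")                          # assumed file
bean = CharacterCutout("Mr. Bean",
                       Image.open("mr_bean_cutout.png").convert("RGBA"))
sample = composite_coexistence_frame(bg, bean, position=(120, 60), scale=0.8)
caption = "Mr. Bean stands in the kitchen while Tom chases Jerry."
```

Per the abstract, such synthetic co-existence frames are paired with mixed-style data so the model also learns not to drift between rendering styles.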
Why it matters?
This work is important because it opens up possibilities for new kinds of storytelling and video creation. Imagine being able to put any characters you want into a scene together and have them interact convincingly! It could lead to more creative and engaging content, and it's a step towards more powerful and flexible AI video generation.
Abstract
Imagine Mr. Bean stepping into Tom and Jerry--can we generate videos where characters interact naturally across different worlds? We study inter-character interaction in text-to-video generation, where the key challenge is to preserve each character's identity and behaviors while enabling coherent cross-context interaction. This is difficult because characters may never have coexisted and because mixing styles often causes style delusion, where realistic characters appear cartoonish or vice versa. We introduce a framework that tackles these issues with Cross-Character Embedding (CCE), which learns identity and behavioral logic across multimodal sources, and Cross-Character Augmentation (CCA), which enriches training with synthetic co-existence and mixed-style data. Together, these techniques allow natural interactions between characters that have never coexisted, without losing stylistic fidelity. Experiments on a curated benchmark of cartoons and live-action series with 10 characters show clear improvements in identity preservation, interaction quality, and robustness to style delusion, enabling new forms of generative storytelling. Additional results and videos are available on our project page: https://tingtingliao.github.io/mimix/.
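The abstract describes CCE only at a high level: learning identity and behavioral logic across multimodal sources. One common way to realize per-character identity conditioning, sketched below purely as an assumption and not as the paper's actual design, is a bank of learnable character embeddings injected into the text-conditioning sequence of the video model; `CharacterEmbeddingBank` and all dimensions are illustrative.

```python
import torch
import torch.nn as nn

class CharacterEmbeddingBank(nn.Module):
    """Learnable per-character embeddings prepended to the text-conditioning
    tokens (a sketch in the spirit of textual inversion; the paper's CCE
    may differ)."""
    def __init__(self, character_names: list[str], dim: int = 768):
        super().__init__()
        self.index = {name: i for i, name in enumerate(character_names)}
        self.embeddings = nn.Embedding(len(character_names), dim)

    def forward(self, text_tokens: torch.Tensor, names: list[str]) -> torch.Tensor:
        """Prepend each referenced character's embedding to the token sequence."""
        ids = torch.tensor([self.index[n] for n in names])
        char_emb = self.embeddings(ids)                    # (num_chars, dim)
        char_emb = char_emb.unsqueeze(0).expand(text_tokens.size(0), -1, -1)
        return torch.cat([char_emb, text_tokens], dim=1)   # (B, num_chars + T, dim)

# Usage: condition a batch of prompts on two named characters.
bank = CharacterEmbeddingBank(["Mr. Bean", "Tom", "Jerry"], dim=768)
tokens = torch.randn(2, 77, 768)                 # stand-in for text-encoder output
conditioned = bank(tokens, ["Mr. Bean", "Tom"])  # shape (2, 79, 768)
```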