GenCompositor: Generative Video Compositing with Diffusion Transformer
Shuzhou Yang, Xiaoyu Li, Xiaodong Cun, Guangzhi Wang, Lingen Li, Ying Shan, Jian Zhang
2025-09-03
Summary
This paper introduces a new way to combine videos using generative AI models. The idea is to automatically insert dynamic elements from one video into another so the result looks realistic, while letting users easily customize how the inserted element appears and moves.
What's the problem?
Traditionally, combining videos – like adding a person into a scene or changing a background – is a time-consuming and expensive process. It requires skilled visual effects artists and extensive manual work, leading to long production cycles for movies and videos.
What's the solution?
The researchers built a system, GenCompositor, on top of a Diffusion Transformer (DiT) to automate video compositing. A lightweight background preservation branch keeps the original (target) video consistent before and after new elements are added, while a fusion block blends the foreground and background videos according to user instructions, such as where the new element should appear, how large it should be, and how it should move (a hedged sketch of such a fusion block follows below). To make training possible, they also curated VideoComp, a large dataset of 61K video sets built specifically for this task.
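To make the blending step concrete, here is a minimal, hypothetical sketch of a DiT fusion block that concatenates target-video tokens with foreground tokens and runs full self-attention over the joint sequence, as the abstract describes. The class name, layer sizes, and normalization layout are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class FusionDiTBlock(nn.Module):
    """Hypothetical fusion block: full self-attention over the concatenation of
    target-video tokens and foreground tokens, so the dynamic element is absorbed
    without a separate cross-attention path (an assumed layout, not the paper's code)."""
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, target_tokens: torch.Tensor, foreground_tokens: torch.Tensor) -> torch.Tensor:
        x = torch.cat([target_tokens, foreground_tokens], dim=1)   # joint token sequence
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]          # full self-attention
        x = x + self.mlp(self.norm2(x))
        return x[:, : target_tokens.shape[1]]                      # keep only target-video tokens

# Toy shapes: batch of 2, 128 target tokens, 64 foreground tokens, width 256.
block = FusionDiTBlock(dim=256)
out = block(torch.randn(2, 128, 256), torch.randn(2, 64, 256))     # -> (2, 128, 256)
```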
Why it matters?
This research is important because it could significantly speed up and lower the cost of video production. It allows for more creative control and makes it easier for people without specialized skills to create high-quality video content. Essentially, it brings the power of visual effects to a wider audience.
Abstract
Video compositing combines live-action footage into a finished video and serves as a crucial technique in video creation and film production. Traditional pipelines require intensive manual labor and expert collaboration, resulting in lengthy production cycles and high labor costs. To address this issue, we automate the process with generative models, a task we call generative video compositing. This new task strives to adaptively inject the identity and motion information of a foreground video into a target video in an interactive manner, allowing users to customize the size, motion trajectory, and other attributes of the dynamic elements added to the final video. Specifically, we designed a novel Diffusion Transformer (DiT) pipeline based on its intrinsic properties. To maintain consistency of the target video before and after editing, we devised a lightweight DiT-based background preservation branch with masked token injection. To inherit dynamic elements from other sources, we propose a DiT fusion block that uses full self-attention, along with a simple yet effective foreground augmentation for training. In addition, to fuse background and foreground videos with different layouts under user control, we developed a novel position embedding named Extended Rotary Position Embedding (ERoPE). Finally, we curated VideoComp, a dataset comprising 61K sets of videos for our new task; it includes complete dynamic elements and high-quality target videos. Experiments demonstrate that our method effectively realizes generative video compositing, outperforming existing alternatives in fidelity and consistency.
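The abstract does not spell out ERoPE's exact formulation, so the following is only a minimal 1D sketch of the general idea: when background and foreground token sequences are concatenated for full self-attention, the foreground tokens are assigned extended, non-overlapping rotary positions instead of reusing the background's position range. The function names and the 1D simplification are assumptions for illustration.

```python
import torch

def rope_angles(positions: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
    """Rotation angles for 1D RoPE: one angle per (position, frequency) pair."""
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)   # (dim/2,)
    return positions.float()[:, None] * freqs[None, :]                      # (N, dim/2)

def apply_rope(x: torch.Tensor, angles: torch.Tensor) -> torch.Tensor:
    """Rotate channel pairs of x (N, dim) by the given angles (N, dim/2)."""
    x1, x2 = x[..., 0::2], x[..., 1::2]
    cos, sin = angles.cos(), angles.sin()
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

def erope_positions(n_bg: int, n_fg: int) -> torch.Tensor:
    """Hypothetical extended-position scheme: foreground tokens continue the index
    past the background tokens rather than reusing 0..n_bg-1, so the two sequences
    never collide positionally under full self-attention."""
    bg_pos = torch.arange(n_bg)
    fg_pos = torch.arange(n_fg) + n_bg   # extended, non-overlapping positions
    return torch.cat([bg_pos, fg_pos])

# Toy usage: rotate concatenated background + foreground tokens before attention.
dim, n_bg, n_fg = 64, 16, 16
tokens = torch.randn(n_bg + n_fg, dim)
angles = rope_angles(erope_positions(n_bg, n_fg), dim)
rotated = apply_rope(tokens, angles)     # fed to full self-attention in the fusion block
```

In a real video DiT the positions would be factorized over time, height, and width, but the same principle of extending the position range for the conditioning tokens would apply.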