
DualCamCtrl: Dual-Branch Diffusion Model for Geometry-Aware Camera-Controlled Video Generation

Hongfei Zhang, Kanghao Chen, Zixin Zhang, Harold Haodong Chen, Yuanhuiyi Lyu, Yuqi Zhang, Shuai Yang, Kun Zhou, Yingcong Chen

2025-12-03


Summary

This paper introduces DualCamCtrl, a system that generates videos following user-specified camera movements, built on a type of artificial intelligence called a diffusion model.

What's the problem?

Existing methods for generating videos with controlled camera movements often struggle to produce results that look realistic and consistent because they don't fully understand the 3D structure of the scene. They may get the camera path right, but objects in the video fail to fit together properly or appear distorted when viewed from different angles.

What's the solution?

DualCamCtrl addresses this by generating the color images (RGB) and the depth information (how far away things are) of the video at the same time, using two interconnected branches. A mechanism called Semantic Guided Mutual Alignment (SIGMA) then fuses the color and depth features in a semantics-guided way, so the two modalities stay consistent with each other. This helps the model better understand the scene's geometry and produce realistic videos that accurately follow the camera's path. The researchers also found that different stages of the denoising process play complementary roles: early stages build the overall structure, while later stages refine local details.
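To make the dual-branch idea concrete, here is a minimal sketch of what a semantics-guided mutual RGB-depth fusion step could look like. This is a hypothetical illustration, not the paper's actual SIGMA implementation: the function names (`cross_attend`, `sigma_fuse`), the use of semantic tokens as attention queries, and the residual mixing are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(queries, keys, values):
    # Scaled dot-product attention: each query token attends over key/value tokens.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)
    return softmax(scores, axis=-1) @ values

def sigma_fuse(rgb_tokens, depth_tokens, semantic_tokens):
    """Hypothetical semantics-guided mutual fusion (NOT the paper's exact design).

    Semantic tokens act as queries so that information flows between the RGB
    and depth branches in a mutually reinforced way: each branch receives a
    residual update computed from the other branch's features.
    """
    rgb_out = rgb_tokens + cross_attend(semantic_tokens, depth_tokens, depth_tokens)
    depth_out = depth_tokens + cross_attend(semantic_tokens, rgb_tokens, rgb_tokens)
    return rgb_out, depth_out
```

In a real diffusion backbone these tokens would be intermediate feature maps and the attention would use learned projections; the sketch only shows the mutual, semantics-conditioned direction of information flow between the two branches.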

Why it matters?

This research matters because it significantly improves the quality and realism of videos generated with controlled camera movements. It reduces camera motion errors by over 40% compared to previous methods, so the generated videos track their intended trajectories much more faithfully. This has potential applications in visual effects, virtual reality experiences, and training simulations.

Abstract

This paper presents DualCamCtrl, a novel end-to-end diffusion model for camera-controlled video generation. Recent works have advanced this field by representing camera poses as ray-based conditions, yet they often lack sufficient scene understanding and geometric awareness. DualCamCtrl specifically targets this limitation by introducing a dual-branch framework that mutually generates camera-consistent RGB and depth sequences. To harmonize these two modalities, we further propose the Semantic Guided Mutual Alignment (SIGMA) mechanism, which performs RGB-depth fusion in a semantics-guided and mutually reinforced manner. These designs collectively enable DualCamCtrl to better disentangle appearance and geometry modeling, generating videos that more faithfully adhere to the specified camera trajectories. Additionally, we analyze and reveal the distinct influence of depth and camera poses across denoising stages and further demonstrate that early and late stages play complementary roles in forming global structure and refining local details. Extensive experiments demonstrate that DualCamCtrl achieves more consistent camera-controlled video generation, with over 40% reduction in camera motion errors compared with prior methods. Our project page: https://soyouthinkyoucantell.github.io/dualcamctrl-page/