DVD: Deterministic Video Depth Estimation with Generative Priors

Hongfei Zhang, Harold Haodong Chen, Chenfei Liao, Jing He, Zixin Zhang, Haodong Li, Yihao Liang, Kanghao Chen, Bin Ren, Xu Zheng, Shuai Yang, Kun Zhou, Yinchuan Li, Nicu Sebe, Ying-Cong Chen

2026-03-13

Summary

This paper introduces a new method, DVD, for estimating depth in videos, which is how computers 'see' distance in a scene. It aims to improve upon existing techniques that either create unrealistic depth maps or require huge amounts of training data.

What's the problem?

Current methods for estimating depth in videos have significant drawbacks. Approaches that *generate* depth often produce 'hallucinations' – they invent geometric details that aren't really there – and their depth scale can drift over time. On the other hand, *discriminative* methods, which directly regress depth from the input, require massive labeled datasets to resolve ambiguities about how far away different objects should be, which is expensive and time-consuming to collect.

What's the solution?

DVD solves this by adapting existing video diffusion models to directly *predict* depth in a single pass, rather than through many iterative denoising steps. It does this in three key ways: first, it repurposes the diffusion timestep – the model's internal noise-level signal – as a structural anchor, keeping the overall depth stable while still capturing fine details. Second, it rectifies the predicted depth latents to avoid the over-smoothing that regression tends to cause, preserving sharp object boundaries and coherent motion. Finally, it relies on the predictions being consistent across different windows of the video (up to a global affine transform), which lets it process long videos smoothly without complex temporal alignment.
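The core contrast – one deterministic forward pass at a fixed anchor timestep instead of a stochastic multi-step denoising loop – can be sketched in toy form. Everything here is hypothetical: `backbone` stands in for a pre-trained video diffusion network, and `t_anchor=500` is an illustrative value, not the paper's actual setting.

```python
import numpy as np

def backbone(latents, t):
    # Stand-in for a pre-trained video diffusion backbone (UNet/DiT).
    # Here: a toy deterministic function of the input and the timestep.
    return np.tanh(latents * (1.0 + t / 1000.0))

def iterative_denoise(latents, steps=50, seed=None):
    # Conventional generative route: start from noised latents and run
    # many denoising passes. The injected noise makes outputs stochastic,
    # which is the source of the 'hallucination' problem described above.
    rng = np.random.default_rng(seed)
    x = latents + 0.1 * rng.standard_normal(latents.shape)
    for t in reversed(range(steps)):
        x = x - 0.01 * backbone(x, t)
    return x

def single_pass_depth(latents, t_anchor=500):
    # DVD-style deterministic route (sketch): a single forward pass at a
    # fixed anchor timestep, with the output read off directly as depth.
    # No noise is injected, so identical inputs give identical depth.
    return backbone(latents, t_anchor)
```

Because `single_pass_depth` injects no noise, running it twice on the same encoded frames returns bit-identical depth latents, whereas `iterative_denoise` varies run to run.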

Why it matters?

This work is important because it achieves better results than previous methods while using far less labeled data – roughly 163 times less task-specific data than leading baselines, per the abstract. It unlocks the geometric priors already present in powerful, pre-trained video models, making accurate depth estimation easier and more efficient. The researchers are also releasing their full training pipeline, which will help others build on their work and advance the field.

Abstract

Existing video depth estimation faces a fundamental trade-off: generative models suffer from stochastic geometric hallucinations and scale drift, while discriminative models demand massive labeled datasets to resolve semantic ambiguities. To break this impasse, we present DVD, the first framework to deterministically adapt pre-trained video diffusion models into single-pass depth regressors. Specifically, DVD features three core designs: (i) repurposing the diffusion timestep as a structural anchor to balance global stability with high-frequency details; (ii) latent manifold rectification (LMR) to mitigate regression-induced over-smoothing, enforcing differential constraints to restore sharp boundaries and coherent motion; and (iii) global affine coherence, an inherent property bounding inter-window divergence, which enables seamless long-video inference without requiring complex temporal alignment. Extensive experiments demonstrate that DVD achieves state-of-the-art zero-shot performance across benchmarks. Furthermore, DVD successfully unlocks the profound geometric priors implicit in video foundation models using 163x less task-specific data than leading baselines. Notably, we fully release our pipeline, providing the whole training suite for SOTA video depth estimation to benefit the open-source community.
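"Global affine coherence" in the abstract refers to depth predictions from different windows agreeing up to a global scale and shift, which is what makes stitching long videos possible without explicit temporal alignment. As a minimal illustration of what an affine (scale-and-shift) relationship between two windows' overlapping depths means – not the paper's method, just a standard least-squares fit with made-up numbers:

```python
import numpy as np

def affine_fit(pred, ref):
    # Least-squares scale s and shift b such that s * pred + b ≈ ref.
    A = np.stack([pred, np.ones_like(pred)], axis=1)
    s, b = np.linalg.lstsq(A, ref, rcond=None)[0]
    return s, b

# Toy overlap: two windows whose depths differ only by an affine map.
overlap_a = np.linspace(1.0, 5.0, 100)
overlap_b = 2.0 * overlap_a + 0.3

s, b = affine_fit(overlap_a, overlap_b)
residual = np.max(np.abs(s * overlap_a + b - overlap_b))
```

When the two windows are affinely coherent, the fitted residual is (near) zero, so consecutive windows can be read as one consistent depth video.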