Hybrid-grained Feature Aggregation with Coarse-to-fine Language Guidance for Self-supervised Monocular Depth Estimation
Wenyao Zhang, Hongsi Liu, Bohan Li, Jiawei He, Zekun Qi, Yunnan Wang, Shengyang Zhao, Xinqiang Yu, Wenjun Zeng, Xin Jin
2025-10-13
Summary
This paper introduces a new method, Hybrid-depth, for improving how computers estimate the depth of objects in a scene using only a single camera image. It focuses on making depth estimation more accurate by incorporating a better understanding of what's visually present and its spatial arrangement.
What's the problem?
Current methods for estimating depth from a single image struggle because they don't fully grasp the meaning of what they're seeing or the relationships between different parts of the scene. They lack sufficient 'semantic-spatial knowledge,' meaning they don't connect *what* things are with *where* they are in a robust way, leading to inaccurate depth maps.
What's the solution?
Hybrid-depth tackles this by combining the strengths of two powerful image understanding models, CLIP and DINO. CLIP excels at understanding the overall meaning of an image, while DINO is good at recognizing local details and spatial relationships. The method first uses these models to extract features, guided by text descriptions to ensure the features relate to depth. Then, it refines these initial depth estimates by incorporating information about the camera's position and aligning the depth predictions with pixel-level language understanding. It's designed to be easily added to existing depth estimation systems.
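The idea of fusing a global semantic embedding with local patch features, then scoring patches against depth-related text prompts, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the embeddings below are random placeholders standing in for real CLIP and DINO outputs, and the "close"/"distant" text anchors are hypothetical prompt embeddings.

```python
import numpy as np

def l2norm(x):
    """Normalize vectors to unit length along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

rng = np.random.default_rng(0)
D = 64
# Placeholders: a real pipeline would produce these with CLIP and DINO.
clip_global = l2norm(rng.normal(size=D))         # image-level semantic vector
dino_patches = l2norm(rng.normal(size=(16, D)))  # 16 local patch tokens

# Hybrid feature: inject the global semantic context into every local patch.
hybrid = l2norm(dino_patches + clip_global)      # shape (16, D)

# Depth-aware proxy: hypothetical text embeddings for "close" vs "distant".
t_close = l2norm(rng.normal(size=D))
t_far = l2norm(rng.normal(size=D))

# Per-patch relative depth score: higher means more "close"-like.
scores = hybrid @ t_close - hybrid @ t_far
print(scores.shape)  # → (16,)
```

In the actual method, a contrastive loss over such scores for close versus distant patches would push the fused features toward depth-aware alignment; here the scores merely show the shape of the computation.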
Why it matters?
This research is important because more accurate depth estimation has many real-world applications, like helping self-driving cars 'see' the road and understand their surroundings. By significantly improving depth estimation, especially in challenging scenarios, Hybrid-depth can contribute to better performance in these and other computer vision tasks, such as creating bird's-eye views of scenes for autonomous navigation.
Abstract
Current self-supervised monocular depth estimation (MDE) approaches encounter performance limitations due to insufficient semantic-spatial knowledge extraction. To address this challenge, we propose Hybrid-depth, a novel framework that systematically integrates foundation models (e.g., CLIP and DINO) to extract visual priors and acquire sufficient contextual information for MDE. Our approach introduces a coarse-to-fine progressive learning framework: 1) First, we aggregate multi-grained features from CLIP (global semantics) and DINO (local spatial details) under contrastive language guidance. A proxy task comparing close-distant image patches is designed to enforce depth-aware feature alignment using text prompts; 2) Next, building on the coarse features, we integrate camera pose information and pixel-wise language alignment to refine depth predictions. This module integrates seamlessly with existing self-supervised MDE pipelines (e.g., Monodepth2, ManyDepth) as a plug-and-play depth encoder, enhancing continuous depth estimation. By aggregating CLIP's semantic context and DINO's spatial details through language guidance, our method effectively addresses feature granularity mismatches. Extensive experiments on the KITTI benchmark demonstrate that our method significantly outperforms SOTA methods across all metrics, which in turn benefits downstream tasks such as BEV perception. Code is available at https://github.com/Zhangwenyao1/Hybrid-depth.
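The "plug-and-play depth encoder" claim amounts to matching the interface existing pipelines expect: Monodepth2-style systems consume a pyramid of feature maps from their encoder. The sketch below is a hypothetical stand-in (class name and shapes are invented, and the feature maps are zeros rather than real CLIP/DINO outputs) showing only the interface a drop-in encoder would have to satisfy.

```python
import numpy as np

class HybridDepthEncoder:
    """Hypothetical plug-and-play encoder: returns a coarse-to-fine list of
    feature maps, mirroring the interface a Monodepth2-style decoder expects,
    so it could replace the stock ResNet encoder."""

    def __init__(self, dim=64):
        self.dim = dim  # channel width of each feature map (illustrative)

    def __call__(self, image):
        # A real version would fuse CLIP and DINO features here; this
        # placeholder just emits zero maps at four pyramid scales.
        h, w, _ = image.shape
        return [np.zeros((h // s, w // s, self.dim)) for s in (1, 2, 4, 8)]

enc = HybridDepthEncoder()
pyr = enc(np.zeros((64, 64, 3)))
print([f.shape[:2] for f in pyr])  # → [(64, 64), (32, 32), (16, 16), (8, 8)]
```

Because only the feature-pyramid contract matters, the downstream depth decoder and the photometric self-supervision losses remain untouched when the encoder is swapped in.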