Depth Anywhere: Enhancing 360 Monocular Depth Estimation via Perspective Distillation and Unlabeled Data Augmentation

Ning-Hsu Wang, Yu-Lun Liu

2024-06-19

Summary

This paper presents a new method for improving depth estimation in 360-degree images, which is important for applications like virtual reality and autonomous navigation. The authors introduce a framework that effectively uses unlabeled data to enhance the accuracy of depth predictions.

What's the problem?

Estimating depth accurately in 360-degree images is challenging because traditional methods designed for standard images do not work well with the unique distortions and projections of 360-degree views. Additionally, existing methods specifically for 360-degree images often struggle due to a lack of labeled data, which is necessary for training models to understand depth.

What's the solution?

To solve this problem, the authors propose a framework that makes effective use of unlabeled 360-degree data by employing state-of-the-art perspective depth estimation models as 'teachers' to create pseudo labels. Each 360-degree image is projected onto a six-face cube, turning it into perspective views the teacher can handle, and the teacher's predictions on those views supervise the 360-degree student model. The approach has two main stages: offline generation of masks that flag invalid regions in the images, and an online semi-supervised training regime that combines labeled and unlabeled data. Tested on well-known benchmark datasets, the method shows significant improvements in depth estimation accuracy, especially in zero-shot settings, where the model is evaluated on datasets it was never trained on.
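To make the teacher-student idea concrete, here is a minimal sketch (not the authors' code) of how such pseudo labels could be produced: the 360-degree equirectangular image is resampled onto six 90-degree cube faces, and an off-the-shelf perspective depth model is run on each face. The `predict_depth` callable and the axis conventions are illustrative assumptions.

```python
# Sketch of cube-projection pseudo-labeling: split an equirectangular 360
# image into six perspective cube faces and let a perspective depth
# "teacher" label each face. `predict_depth` is a placeholder for any
# off-the-shelf perspective model, not the paper's exact implementation.
import numpy as np

def cube_face_directions(face: str, size: int) -> np.ndarray:
    """Unit view directions for one cube face (90-degree field of view)."""
    # Pixel grid in [-1, 1] on the face plane at unit distance.
    u, v = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size))
    ones = np.ones_like(u)
    # Axis convention (one common choice): x right, y down, z forward.
    dirs = {
        "front": ( u,  v,  ones),
        "back":  (-u,  v, -ones),
        "right": ( ones, v, -u),
        "left":  (-ones, v,  u),
        "up":    ( u, -ones,  v),
        "down":  ( u,  ones, -v),
    }[face]
    d = np.stack(dirs, axis=-1)
    return d / np.linalg.norm(d, axis=-1, keepdims=True)

def sample_equirect(equi: np.ndarray, dirs: np.ndarray) -> np.ndarray:
    """Nearest-neighbour sampling of an equirectangular image along dirs."""
    h, w = equi.shape[:2]
    lon = np.arctan2(dirs[..., 0], dirs[..., 2])       # longitude in [-pi, pi]
    lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))  # latitude in [-pi/2, pi/2]
    x = ((lon / (2 * np.pi) + 0.5) * (w - 1)).round().astype(int)
    y = ((lat / np.pi + 0.5) * (h - 1)).round().astype(int)
    return equi[y % h, x % w]

def pseudo_label_cubemap(equi_rgb, predict_depth, face_size=256):
    """Run a perspective teacher on each cube face to get pseudo depth labels."""
    labels = {}
    for face in ["front", "back", "left", "right", "up", "down"]:
        face_rgb = sample_equirect(equi_rgb, cube_face_directions(face, face_size))
        labels[face] = predict_depth(face_rgb)  # teacher's pseudo label for this face
    return labels
```

In the paper's pipeline, the teacher's outputs would additionally be filtered by the offline invalid-region masks before being used as supervision.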

Why it matters?

This research is important because it addresses a key limitation in depth estimation for 360-degree imagery. By effectively using unlabeled data and improving the training process, this framework can enhance technologies used in virtual reality, autonomous vehicles, and other immersive media applications. This advancement could lead to better user experiences and more reliable systems in these fields.

Abstract

Accurately estimating depth in 360-degree imagery is crucial for virtual reality, autonomous navigation, and immersive media applications. Existing depth estimation methods designed for perspective-view imagery fail when applied to 360-degree images due to different camera projections and distortions, whereas 360-degree methods perform worse due to the lack of labeled data pairs. We propose a new depth estimation framework that utilizes unlabeled 360-degree data effectively. Our approach uses state-of-the-art perspective depth estimation models as teacher models to generate pseudo labels through a six-face cube projection technique, enabling efficient labeling of depth in 360-degree images. This method leverages the increasing availability of large datasets. Our approach includes two main stages: offline mask generation for invalid regions and an online semi-supervised joint training regime. We tested our approach on benchmark datasets such as Matterport3D and Stanford2D3D, showing significant improvements in depth estimation accuracy, particularly in zero-shot scenarios. Our proposed training pipeline can enhance any 360 monocular depth estimator and demonstrates effective knowledge transfer across different camera projections and data types. See our project page for results: https://albert100121.github.io/Depth-Anywhere/
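To illustrate how the two stages described in the abstract might fit together, below is a hedged sketch of one online joint-training step. The `student` network, the boolean validity masks, the L1 losses, and the loss weighting are illustrative assumptions, not the authors' exact recipe.

```python
# Sketch of one semi-supervised joint-training step: a labeled 360 batch
# provides ground-truth supervision, an unlabeled 360 batch is supervised
# by teacher pseudo labels, and both losses are restricted to valid pixels
# via boolean masks (shapes assumed to match the student's output).
import torch
import torch.nn.functional as F

def joint_training_step(student, labeled_batch, unlabeled_batch, optimizer,
                        pseudo_weight: float = 0.5):
    rgb_l, depth_gt, mask_l = labeled_batch        # labeled 360 images + GT depth
    rgb_u, pseudo_depth, mask_u = unlabeled_batch  # unlabeled images + teacher labels

    pred_l = student(rgb_l)
    pred_u = student(rgb_u)

    # Supervised loss on valid pixels of the labeled data.
    loss_sup = F.l1_loss(pred_l[mask_l], depth_gt[mask_l])
    # Distillation loss against the teacher's pseudo labels, also masked.
    loss_pseudo = F.l1_loss(pred_u[mask_u], pseudo_depth[mask_u])

    loss = loss_sup + pseudo_weight * loss_pseudo
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```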