GaMO: Geometry-aware Multi-view Diffusion Outpainting for Sparse-View 3D Reconstruction
Yi-Chuan Huang, Hao-Jen Chien, Chin-Yang Lin, Ying-Huan Chen, Yu-Lun Liu
2026-01-01
Summary
This paper focuses on improving how we create 3D models of objects and scenes from a limited number of pictures. Current methods are really good when you have lots of photos, but struggle when you don't have enough viewpoints.
What's the problem?
Existing techniques for 3D reconstruction with few images have issues. They often fail to cover the scene completely, leaving gaps beyond what the original photos show. Also, the generated parts sometimes don't line up geometrically with the real views, creating inconsistencies. Finally, many of the best methods are very slow and require a lot of computing power.
What's the solution?
The researchers developed a new system called GaMO, which stands for Geometry-aware Multi-view Outpainter. Instead of trying to create entirely new views, GaMO expands the existing views – essentially zooming out from the photos you already have. This keeps everything geometrically correct and provides a wider view of the scene without needing to train a model beforehand. It uses clever techniques to fill in the expanded areas while staying consistent with the original images.
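To make the "zooming out" idea concrete, here is a minimal sketch of how widening the field of view from a fixed camera pose can be set up with a standard pinhole camera model. This is an illustration of the general geometry, not GaMO's actual implementation: the function name `expand_fov`, the scale factor, and the intrinsic values are all hypothetical. Keeping the focal length fixed while enlarging the image canvas widens the FOV; the original pixels remain geometrically valid in the center, and the new border is exactly the region an outpainting model must fill.

```python
import numpy as np

def expand_fov(K, img_hw, scale=1.5):
    """Pad the image canvas around a fixed pinhole camera to widen its FOV.

    Illustrative sketch (not GaMO's code): with focal length unchanged,
    a larger canvas is equivalent to zooming out. The original pixels
    stay valid in the center; the border is the region to outpaint.
    """
    h, w = img_hw
    new_h, new_w = int(round(h * scale)), int(round(w * scale))
    pad_y, pad_x = (new_h - h) // 2, (new_w - w) // 2

    K_new = K.copy().astype(float)
    K_new[0, 2] += pad_x  # shift principal point so the original
    K_new[1, 2] += pad_y  # pixels sit in the center of the new canvas

    # Boolean mask of known pixels: True where the original image lands.
    known = np.zeros((new_h, new_w), dtype=bool)
    known[pad_y:pad_y + h, pad_x:pad_x + w] = True
    return K_new, (new_h, new_w), known

# Hypothetical intrinsics: 500 px focal length, 640x480 image.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
K_new, size, mask = expand_fov(K, (480, 640), scale=1.5)
# Horizontal FOV grows from 2*atan(320/500) ~ 65 deg to 2*atan(480/500) ~ 88 deg
```

Because the camera pose never changes, every filled-in border pixel lies on a ray from the same camera center as the original image, which is why this formulation preserves geometric consistency by construction.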
Why it matters?
GaMO is a significant step forward because it produces higher quality 3D models with fewer input images, and it does so much faster than previous methods. It’s about 25 times quicker than the best diffusion-based approaches, completing reconstructions in under 10 minutes. This makes it more practical for real-world applications where getting lots of photos isn't always possible or convenient.
Abstract
Recent advances in 3D reconstruction have achieved remarkable progress in high-quality scene capture from dense multi-view imagery, yet struggle when input views are limited. Various approaches, including regularization techniques, semantic priors, and geometric constraints, have been implemented to address this challenge. Latest diffusion-based methods have demonstrated substantial improvements by generating novel views from new camera poses to augment training data, surpassing earlier regularization and prior-based techniques. Despite this progress, we identify three critical limitations in these state-of-the-art approaches: inadequate coverage beyond known view peripheries, geometric inconsistencies across generated views, and computationally expensive pipelines. We introduce GaMO (Geometry-aware Multi-view Outpainter), a framework that reformulates sparse-view reconstruction through multi-view outpainting. Instead of generating new viewpoints, GaMO expands the field of view from existing camera poses, which inherently preserves geometric consistency while providing broader scene coverage. Our approach employs multi-view conditioning and geometry-aware denoising strategies in a zero-shot manner without training. Extensive experiments on Replica and ScanNet++ demonstrate state-of-the-art reconstruction quality across 3, 6, and 9 input views, outperforming prior methods in PSNR and LPIPS, while achieving a 25× speedup over SOTA diffusion-based methods with processing time under 10 minutes. Project page: https://yichuanh.github.io/GaMO/