Lotus-2: Advancing Geometric Dense Prediction with Powerful Image Generative Model
Jing He, Haodong Li, Mingzhi Sheng, Ying-Cong Chen
2025-12-02
Summary
This paper introduces a new method, Lotus-2, for recovering the 3D shape and geometry of a scene from a single 2D image. It leverages diffusion models, which encode rich priors about the visual world from large-scale pre-training, but adapts them to produce precise, stable geometric predictions instead of generating new images.
What's the problem?
Determining the 3D structure of a scene from a single image is a really hard problem: many different 3D shapes can project to the same 2D picture, so the image alone doesn't pin down what's actually there. Existing methods either need huge amounts of training data or don't really 'understand' the physics of the world, limiting their accuracy.
What's the solution?
Lotus-2 tackles this in two stages. First, a core predictor produces a globally consistent 3D structure in a single deterministic step, using a lightweight local continuity module to avoid grid-like artifacts. Then, a detail sharpener refines this structure through a constrained multi-step process that smoothly adjusts the shape while staying within the boundaries set by the first stage, yielding an accurate and detailed geometric representation. Importantly, it achieves this with only about 59K training samples, a small fraction of what comparable methods use.
Why it matters?
This work shows that diffusion models, which are usually used for generating images, can also be used for precise 3D reconstruction. It opens up possibilities for better computer vision systems that can understand the 3D world around us with less data and more accuracy than previous methods, moving beyond traditional approaches to geometric reasoning.
Abstract
Recovering pixel-wise geometric properties from a single image is fundamentally ill-posed due to appearance ambiguity and non-injective mappings between 2D observations and 3D structures. While discriminative regression models achieve strong performance through large-scale supervision, their success is bounded by the scale, quality and diversity of available data and limited physical reasoning. Recent diffusion models exhibit powerful world priors that encode geometry and semantics learned from massive image-text data, yet directly reusing their stochastic generative formulation is suboptimal for deterministic geometric inference: the former is optimized for diverse and high-fidelity image generation, whereas the latter requires stable and accurate predictions. In this work, we propose Lotus-2, a two-stage deterministic framework for stable, accurate and fine-grained geometric dense prediction, aiming to provide an optimal adaptation protocol to fully exploit the pre-trained generative priors. Specifically, in the first stage, the core predictor employs a single-step deterministic formulation with a clean-data objective and a lightweight local continuity module (LCM) to generate globally coherent structures without grid artifacts. In the second stage, the detail sharpener performs a constrained multi-step rectified-flow refinement within the manifold defined by the core predictor, enhancing fine-grained geometry through noise-free deterministic flow matching. Using only 59K training samples, less than 1% of existing large-scale datasets, Lotus-2 establishes new state-of-the-art results in monocular depth estimation and highly competitive surface normal prediction. These results demonstrate that diffusion models can serve as deterministic world priors, enabling high-quality geometric reasoning beyond traditional discriminative and generative paradigms.
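The two-stage pipeline described in the abstract can be illustrated with a minimal numpy sketch. This is not the paper's implementation: `core_predictor` and the velocity field are toy stand-ins (the real method fine-tunes a diffusion backbone with a clean-data objective and learns the rectified-flow velocity), but the control flow shows the key idea — a single deterministic step produces the coarse geometry, then a noise-free Euler integration of a velocity field refines it, starting from the coarse estimate rather than from random noise.

```python
import numpy as np

def core_predictor(image):
    """Stage 1 (toy stand-in): a single-step deterministic mapping from an
    image to a coarse depth map. In the paper this is a pre-trained diffusion
    backbone with a clean-data (x0) objective plus a local continuity module;
    here it is a fixed per-pixel reduction, purely for illustration."""
    return image.mean(axis=-1)

def detail_sharpener(depth_coarse, velocity_fn, num_steps=4):
    """Stage 2 (toy stand-in): constrained multi-step rectified-flow
    refinement. The trajectory starts at the coarse prediction (not at
    sampled noise) and follows a velocity field with deterministic Euler
    steps, so the result stays anchored to the stage-1 estimate."""
    z = depth_coarse.copy()
    dt = 1.0 / num_steps
    for step in range(num_steps):
        t = step * dt
        z = z + dt * velocity_fn(z, t)  # deterministic, noise-free update
    return z

# Toy demo: the "learned" velocity field nudges the coarse map toward a
# target with fine detail the coarse predictor missed.
rng = np.random.default_rng(0)
image = rng.uniform(size=(8, 8, 3))
target = image.mean(axis=-1) + 0.1 * np.sin(10 * image[..., 0])

coarse = core_predictor(image)
refined = detail_sharpener(coarse, lambda z, t: target - z)

err_coarse = np.abs(coarse - target).mean()
err_refined = np.abs(refined - target).mean()
```

With the linear velocity `target - z`, each Euler step contracts the residual by a factor of `1 - dt`, so the refined map is strictly closer to the target than the coarse one — a crude analogue of how the detail sharpener recovers fine-grained geometry within the manifold set by the core predictor.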