Flex3D: Feed-Forward 3D Generation With Flexible Reconstruction Model And Input View Curation
Junlin Han, Jianyuan Wang, Andrea Vedaldi, Philip Torr, Filippos Kokkinos
2024-10-02

Summary
This paper introduces Flex3D, a new method for creating high-quality 3D content from text or single images, using a flexible two-stage approach to improve the generation process.
What's the problem?
Generating detailed 3D content from simple inputs like text or single images is challenging. Most existing methods use a fixed number of views to create 3D models, which can lead to poor quality if those views aren't diverse or high-quality. This limits the ability to capture different angles and details of the object being modeled.
What's the solution?
Flex3D addresses these issues by using a two-stage framework. First, it generates a variety of candidate views of the object using advanced models, and then it selects the best views based on their quality. In the second stage, these selected views are processed by a Flexible Reconstruction Model (FlexRM), which can handle any number of inputs to create detailed 3D representations. This method allows for better flexibility and quality in generating 3D content.
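The two-stage flow described above can be sketched in a few lines. This is a minimal illustrative outline, not the paper's actual implementation: the function names, the candidate scoring, and the thresholds are all hypothetical stand-ins for the diffusion models, the view-selection pipeline, and FlexRM.

```python
import random

def generate_candidate_views(prompt, n_views=16):
    """Stage 1a (stand-in): the multi-view and video diffusion models
    would produce a pool of candidate views; here each candidate is just
    (view_id, quality_score, consistency_score)."""
    random.seed(0)
    return [(i, random.random(), random.random()) for i in range(n_views)]

def curate_views(candidates, q_thresh=0.5, c_thresh=0.5):
    """Stage 1b (stand-in): keep only views that pass both the quality
    and the multi-view consistency checks."""
    return [v for v in candidates if v[1] >= q_thresh and v[2] >= c_thresh]

def flex_rm(views):
    """Stage 2 (stand-in): FlexRM accepts an arbitrary number of input
    views and outputs a 3D Gaussian representation."""
    return {"n_input_views": len(views), "representation": "3D Gaussians"}

candidates = generate_candidate_views("a ceramic teapot")
selected = curate_views(candidates)
asset = flex_rm(selected)
```

The key point the sketch captures is that stage 2 never assumes a fixed view count: however many candidates survive curation, FlexRM consumes them all.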
Why it matters?
This research is important because it improves how we create 3D models, making it easier and more efficient to generate high-quality images for applications like video games, virtual reality, and product design. By allowing for more diverse input views, Flex3D can produce more accurate and realistic 3D representations.
Abstract
Generating high-quality 3D content from text, single images, or sparse view images remains a challenging task with broad applications. Existing methods typically employ multi-view diffusion models to synthesize multi-view images, followed by a feed-forward process for 3D reconstruction. However, these approaches are often constrained by a small and fixed number of input views, limiting their ability to capture diverse viewpoints and, even worse, leading to suboptimal generation results if the synthesized views are of poor quality. To address these limitations, we propose Flex3D, a novel two-stage framework capable of leveraging an arbitrary number of high-quality input views. The first stage consists of a candidate view generation and curation pipeline. We employ a fine-tuned multi-view image diffusion model and a video diffusion model to generate a pool of candidate views, enabling a rich representation of the target 3D object. Subsequently, a view selection pipeline filters these views based on quality and consistency, ensuring that only high-quality and reliable views are used for reconstruction. In the second stage, the curated views are fed into a Flexible Reconstruction Model (FlexRM), built upon a transformer architecture that can effectively process an arbitrary number of inputs. FlexRM directly outputs 3D Gaussian points leveraging a tri-plane representation, enabling efficient and detailed 3D generation. Through extensive exploration of design and training strategies, we optimize FlexRM to achieve superior performance in both reconstruction and generation tasks. Our results demonstrate that Flex3D achieves state-of-the-art performance, with a user study winning rate of over 92% in 3D generation tasks when compared to several of the latest feed-forward 3D generative models.
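The abstract's mention of a tri-plane representation is worth unpacking: a 3D point is featurized by projecting it onto three axis-aligned feature planes and combining the sampled features. The sketch below shows that lookup under assumed choices (plane resolution, channel count, nearest-neighbor sampling, and summation as the combine rule); FlexRM's actual decoder details are not specified here.

```python
import numpy as np

R, C = 4, 8  # assumed plane resolution and feature channels
rng = np.random.default_rng(0)
# One learned feature plane per axis-aligned pair (values random for the sketch).
planes = {ax: rng.standard_normal((R, R, C)) for ax in ("xy", "xz", "yz")}

def triplane_features(p):
    """Project a point p in [0, 1]^3 onto the xy, xz, and yz planes,
    sample each plane (nearest neighbor), and sum the three features."""
    x, y, z = np.clip((np.asarray(p) * R).astype(int), 0, R - 1)
    return planes["xy"][x, y] + planes["xz"][x, z] + planes["yz"][y, z]

feat = triplane_features((0.2, 0.7, 0.5))  # a C-dimensional feature vector
```

In a model like FlexRM, such per-point features would then be decoded into the parameters of 3D Gaussian points (position, scale, opacity, color); the plane contents are learned rather than random.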