StdGEN: Semantic-Decomposed 3D Character Generation from Single Images

Yuze He, Yanning Zhou, Wang Zhao, Zhongkai Wu, Kaiwen Xiao, Wei Yang, Yong-Jin Liu, Xiao Han

2024-11-11

Summary

This paper introduces StdGEN, a new system that creates high-quality 3D characters from a single image, making it easier to produce detailed, customizable characters for video games, virtual reality, and film.

What's the problem?

Previous methods for generating 3D characters often faced issues like poor quality, limited detail, and long processing times. They struggled to break down the character into its different parts (like body, clothes, and hair) effectively, which made it hard to create complex and customizable characters quickly.

What's the solution?

StdGEN solves these problems with a model called the Semantic-aware Large Reconstruction Model (S-LRM). The pipeline first uses a multi-view diffusion model to generate several consistent views of the character from the single input image; S-LRM then reconstructs the character's geometry, color, and semantics from those views in a single feed-forward pass, yielding a detailed 3D character with separable parts (body, clothes, hair) in about three minutes. A differentiable surface extraction scheme turns the reconstructed implicit fields into meshes, and an iterative refinement module polishes each surface to ensure high quality.
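
To make the flow of stages concrete, here is a minimal Python sketch of the control flow described above. Every function here is a hypothetical placeholder standing in for a component the paper describes (multi-view diffusion, S-LRM, surface extraction and refinement); the real interfaces are not published in this summary, so none of these names should be read as the authors' actual API.

```python
# Hypothetical sketch of StdGEN's staged flow. All callables below are
# placeholders for components described in the paper, not a real API.

def multi_view_diffusion(image, num_views=6):
    """Placeholder: synthesize consistent character views from one image."""
    return [image] * num_views  # stand-in for generated views

def s_lrm(views):
    """Placeholder: feed-forward reconstruction of hybrid implicit fields
    carrying geometry, color, and per-point semantics."""
    return {"geometry": None, "color": None, "semantics": None}

def extract_and_refine(fields, layers):
    """Placeholder: differentiable multi-layer surface extraction followed
    by iterative refinement of each layer's mesh."""
    return {layer: f"mesh<{layer}>" for layer in layers}

def generate_character(image):
    views = multi_view_diffusion(image)           # stage 1: lift to multi-view
    fields = s_lrm(views)                         # stage 2: feed-forward recon
    return extract_and_refine(                    # stage 3: meshes per part
        fields, layers=["body", "clothes", "hair"])

print(generate_character("input.png"))
```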

Why it matters?

This research is important because it allows creators in gaming, animation, and virtual reality to generate detailed and customizable 3D characters more efficiently. By improving the speed and quality of character generation, StdGEN can enhance the creative process and lead to richer experiences in digital media.

Abstract

We present StdGEN, an innovative pipeline for generating semantically decomposed high-quality 3D characters from single images, enabling broad applications in virtual reality, gaming, and filmmaking, etc. Unlike previous methods which struggle with limited decomposability, unsatisfactory quality, and long optimization times, StdGEN features decomposability, effectiveness and efficiency; i.e., it generates intricately detailed 3D characters with separated semantic components such as the body, clothes, and hair, in three minutes. At the core of StdGEN is our proposed Semantic-aware Large Reconstruction Model (S-LRM), a transformer-based generalizable model that jointly reconstructs geometry, color and semantics from multi-view images in a feed-forward manner. A differentiable multi-layer semantic surface extraction scheme is introduced to acquire meshes from hybrid implicit fields reconstructed by our S-LRM. Additionally, a specialized efficient multi-view diffusion model and an iterative multi-layer surface refinement module are integrated into the pipeline to facilitate high-quality, decomposable 3D character generation. Extensive experiments demonstrate our state-of-the-art performance in 3D anime character generation, surpassing existing baselines by a significant margin in geometry, texture and decomposability. StdGEN offers ready-to-use semantic-decomposed 3D characters and enables flexible customization for a wide range of applications. Project page: https://stdgen.github.io
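
The abstract's key mechanism is extracting one mesh per semantic layer from implicit fields. The paper's scheme is differentiable and operates on learned hybrid implicit fields; as a loose, runnable analogue, the sketch below uses plain marching cubes (a non-differentiable stand-in) on toy voxel grids to show the per-layer masking idea. All names and the toy data are illustrative assumptions, not the authors' implementation.

```python
# Runnable analogue of per-layer surface extraction from a semantic
# field: mask the density by semantic label, then extract each layer's
# surface. StdGEN's actual scheme is differentiable; this is not.

import numpy as np
from skimage import measure

def extract_layer_meshes(density, labels, layer_ids, level=0.5):
    """Extract one triangle mesh per semantic layer.

    density:   (D, H, W) occupancy/density volume in [0, 1]
    labels:    (D, H, W) integer semantic label per voxel
    layer_ids: mapping of layer name -> label id (e.g. body/clothes/hair)
    """
    meshes = {}
    for name, label in layer_ids.items():
        # Keep density only where the semantic field assigns this layer.
        layer_density = np.where(labels == label, density, 0.0)
        if layer_density.max() <= level:
            continue  # this layer is absent from the character
        verts, faces, _, _ = measure.marching_cubes(layer_density, level=level)
        meshes[name] = (verts, faces)
    return meshes

# Toy example: a sphere whose upper cap is labeled "hair".
z, y, x = np.mgrid[:64, :64, :64]
density = (np.sqrt((x - 32)**2 + (y - 32)**2 + (z - 32)**2) < 20).astype(float)
labels = np.where(z > 40, 2, 1)  # 1 = body, 2 = hair
meshes = extract_layer_meshes(density, labels, {"body": 1, "hair": 2})
print({name: verts.shape for name, (verts, faces) in meshes.items()})
```

Masking before extraction is what makes the layers independently editable downstream: each part arrives as its own closed mesh rather than a region painted onto one fused surface.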