RodinHD: High-Fidelity 3D Avatar Generation with Diffusion Models
Bowen Zhang, Yiji Cheng, Chunyu Wang, Ting Zhang, Jiaolong Yang, Yansong Tang, Feng Zhao, Dong Chen, Baining Guo
2024-07-10
Summary
This paper introduces RodinHD, a new system for creating high-quality 3D avatars from 2D portrait images. It addresses the limitations of existing methods, which struggle to capture fine details such as hairstyles.
What's the problem?
The main problem is that current techniques for generating 3D avatars often miss important details, especially fine features like hair and texture. In addition, the authors identify an issue known as catastrophic forgetting: because many avatars are fitted sequentially with a shared MLP decoder, the decoder gradually loses details it learned from earlier avatars while adapting to new ones.
What's the solution?
To solve these issues, the authors developed RodinHD. They introduce a novel data scheduling strategy and a weight consolidation regularization term, which help the shared decoder retain important details while it is fitted to new avatars. They also strengthen the guidance from the portrait image by computing a finer-grained hierarchical representation that captures rich texture information, which is injected into the 3D diffusion model at multiple layers via cross-attention. Trained on a dataset of 46,000 avatars with a noise schedule optimized for triplanes, RodinHD generates avatars with much sharper details than previous methods and generalizes to a wide range of portrait images.
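The paper does not spell out the exact form of the weight consolidation term, but a minimal sketch of how such a regularizer is typically applied when fitting many avatars sequentially with a shared decoder might look like the following. The function name, the quadratic penalty form, and the importance weights are illustrative assumptions, not the authors' implementation.

```python
import torch

def weight_consolidation_loss(decoder, anchor_params, importance, lam=1.0):
    """Quadratic penalty that discourages the shared MLP decoder from
    drifting away from weights learned on earlier avatars.

    anchor_params / importance: dicts mapping parameter names to snapshots
    of earlier weights and an (assumed) per-parameter importance score.
    """
    penalty = torch.tensor(0.0)
    for name, param in decoder.named_parameters():
        if name in anchor_params:
            penalty = penalty + (importance[name] * (param - anchor_params[name]) ** 2).sum()
    return lam * penalty

# In a hypothetical sequential triplane-fitting loop, the term would be
# added to the rendering objective:
#   total_loss = render_loss + weight_consolidation_loss(decoder, anchor, fisher)
```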
Why it matters?
This research is important because it significantly enhances the ability to create realistic 3D avatars from simple 2D images. By improving detail capture and reducing forgetting, RodinHD opens up new possibilities for applications in gaming, virtual reality, and online avatars, making digital representations of people more lifelike and personalized.
Abstract
We present RodinHD, which can generate high-fidelity 3D avatars from a portrait image. Existing methods fail to capture intricate details such as hairstyles, which we tackle in this paper. We first identify an overlooked problem of catastrophic forgetting that arises when fitting triplanes sequentially on many avatars, caused by the MLP decoder sharing scheme. To overcome this issue, we propose a novel data scheduling strategy and a weight consolidation regularization term, which improve the decoder's capability of rendering sharper details. Additionally, we optimize the guiding effect of the portrait image by computing a finer-grained hierarchical representation that captures rich 2D texture cues and injecting it into the 3D diffusion model at multiple layers via cross-attention. When trained on 46K avatars with a noise schedule optimized for triplanes, the resulting model can generate 3D avatars with notably better details than previous methods and can generalize to in-the-wild portrait input.
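As a rough illustration of the multi-layer cross-attention conditioning described in the abstract, the sketch below shows one way 2D portrait features could be injected into a layer of a 3D diffusion backbone. The module name, dimensions, residual wiring, and the use of nn.MultiheadAttention are assumptions for illustration; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class CrossAttentionInjection(nn.Module):
    """Hypothetical conditioning layer: queries come from triplane tokens,
    keys/values from hierarchical portrait features flattened into tokens."""

    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, triplane_tokens, portrait_tokens):
        # triplane_tokens: (B, N_3d, dim); portrait_tokens: (B, N_2d, dim)
        attended, _ = self.attn(query=self.norm(triplane_tokens),
                                key=portrait_tokens,
                                value=portrait_tokens)
        # Residual injection of 2D texture cues into the 3D representation
        return triplane_tokens + attended
```

Repeating such a layer at several depths of the diffusion model, with portrait features drawn from different levels of the hierarchical encoder, is one plausible reading of "injecting them at multiple layers via cross-attention."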