
DreamWaltz-G: Expressive 3D Gaussian Avatars from Skeleton-Guided 2D Diffusion

Yukun Huang, Jianan Wang, Ailing Zeng, Zheng-Jun Zha, Lei Zhang, Xihui Liu

2024-09-26

Summary

This paper introduces DreamWaltz-G, a new system that creates expressive, animatable 3D avatars from text prompts by distilling guidance from pretrained 2D diffusion models. It combines skeleton-guided supervision with a hybrid 3D Gaussian representation to generate high-quality animated avatars.

What's the problem?

Creating realistic, animatable 3D avatars from text is challenging. Existing methods that rely on 2D diffusion guidance often produce inconsistent supervision across views and poses, leading to artifacts like multiple faces, extra limbs, or blurring. These problems make it hard to create avatars that look and move naturally.

What's the solution?

The researchers developed DreamWaltz-G, which uses a technique called Skeleton-guided Score Distillation (SkelSD) to guide the generation process. This method injects skeleton controls derived from 3D human templates into the 2D diffusion model, so that the 2D supervision stays consistent with the avatar's viewpoint and body pose. Additionally, they introduced a Hybrid 3D Gaussian Avatar representation that combines efficient 3D Gaussians with neural implicit fields and a parameterized 3D mesh, enabling real-time rendering, stable optimization, and expressive animation. Their experiments showed that DreamWaltz-G produces higher-quality avatars than existing methods.
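To make the idea concrete, here is a minimal sketch of what one skeleton-guided score distillation step could look like, assuming a ControlNet-style diffusion model that accepts a rendered skeleton map as an extra condition. The function and argument names (skelsd_loss, diffusion, skeleton_map, and so on) are illustrative assumptions, not the authors' actual code.

```python
import torch
import torch.nn.functional as F

def skelsd_loss(render, skeleton_map, text_emb, diffusion, alphas_cumprod):
    """One skeleton-guided SDS step (sketch, not the paper's implementation).

    render:         (B, 3, H, W) image rendered from the 3D avatar
    skeleton_map:   (B, 3, H, W) skeleton projected from the 3D human template
                    at the same camera pose as `render`
    text_emb:       text-prompt embedding for the diffusion model
    diffusion:      callable predicting noise, conditioned on text + skeleton
    alphas_cumprod: (T,) cumulative noise schedule of the diffusion model
    """
    t = torch.randint(20, 980, (render.shape[0],), device=render.device)
    noise = torch.randn_like(render)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noisy = a_t.sqrt() * render + (1 - a_t).sqrt() * noise

    # The skeleton condition keeps the 2D guidance consistent with the
    # avatar's current view and pose, reducing multi-face / extra-limb artifacts.
    with torch.no_grad():
        eps_pred = diffusion(noisy, t, text_emb, control=skeleton_map)

    w_t = 1.0 - a_t                       # a common SDS weighting choice
    grad = w_t * (eps_pred - noise)
    # Standard SDS trick: route the (detached) gradient into the renderer
    # through an equivalent MSE objective.
    target = (render - grad).detach()
    return 0.5 * F.mse_loss(render, target, reduction="sum")
```

In practice, this loss would be backpropagated through a differentiable renderer into the 3D Gaussian parameters at every optimization step.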

Why it matters?

This research is important because it advances the technology for creating animated 3D avatars, which can be used in various applications like video games, virtual reality, and animated films. By improving how these avatars are generated, DreamWaltz-G opens up new possibilities for creating more lifelike characters that can interact in digital environments.

Abstract

Leveraging pretrained 2D diffusion models and score distillation sampling (SDS), recent methods have shown promising results for text-to-3D avatar generation. However, generating high-quality 3D avatars capable of expressive animation remains challenging. In this work, we present DreamWaltz-G, a novel learning framework for animatable 3D avatar generation from text. The core of this framework lies in Skeleton-guided Score Distillation and Hybrid 3D Gaussian Avatar representation. Specifically, the proposed skeleton-guided score distillation integrates skeleton controls from 3D human templates into 2D diffusion models, enhancing the consistency of SDS supervision in terms of view and human pose. This facilitates the generation of high-quality avatars, mitigating issues such as multiple faces, extra limbs, and blurring. The proposed hybrid 3D Gaussian avatar representation builds on the efficient 3D Gaussians, combining neural implicit fields and parameterized 3D meshes to enable real-time rendering, stable SDS optimization, and expressive animation. Extensive experiments demonstrate that DreamWaltz-G is highly effective in generating and animating 3D avatars, outperforming existing methods in both visual quality and animation expressiveness. Our framework further supports diverse applications, including human video reenactment and multi-subject scene composition.
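The hybrid representation described above binds efficient 3D Gaussians to a parameterized human body so the avatar can be re-posed. Below is a minimal sketch of the general mechanism such mesh-driven Gaussian avatars typically use: moving Gaussian centers with linear blend skinning weights borrowed from nearby mesh vertices. Names and shapes here are assumptions for illustration, not the paper's exact formulation.

```python
import torch

def animate_gaussian_centers(mu_canonical, skin_weights, joint_transforms):
    """Re-pose Gaussian centers via linear blend skinning (LBS) -- a sketch.

    mu_canonical:     (N, 3)    Gaussian centers in the canonical (rest) pose
    skin_weights:     (N, J)    skinning weights, e.g. copied from the nearest
                                vertices of a parameterized body mesh
    joint_transforms: (J, 4, 4) rigid transforms of the J skeleton joints
                                for the target pose
    """
    # Blend the per-joint rigid transforms into one 4x4 transform per Gaussian.
    T = torch.einsum("nj,jab->nab", skin_weights, joint_transforms)  # (N, 4, 4)

    # Apply the blended transforms in homogeneous coordinates.
    ones = torch.ones_like(mu_canonical[:, :1])
    mu_h = torch.cat([mu_canonical, ones], dim=-1)                   # (N, 4)
    mu_posed = torch.einsum("nab,nb->na", T, mu_h)[:, :3]
    return mu_posed
```

Driving the Gaussians from a skeleton in this way is what allows a single optimized avatar to be animated with new poses and expressions in real time.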