AvatarArtist: Open-Domain 4D Avatarization
Hongyu Liu, Xuan Wang, Ziyu Wan, Yue Ma, Jingye Chen, Yanbo Fan, Yujun Shen, Yibing Song, Qifeng Chen
2025-04-01
Summary
This paper is about creating animatable 4D avatars from a single portrait image, allowing the avatar to be rendered in any artistic style.
What's the problem?
Existing methods struggle to create 4D avatars that are both high-quality and able to generalize beyond realistic portraits to arbitrary styles: 4D GANs can link images to 4D representations without supervision, but they have trouble handling diverse data distributions.
What's the solution?
The researchers developed a new approach called AvatarArtist that combines generative adversarial networks (GANs) with a 2D diffusion prior to create high-quality 4D avatars from just a single image.
Why does it matter?
This work matters because it can make it easier to create personalized and realistic avatars for virtual reality, games, and other applications.
Abstract
This work focuses on open-domain 4D avatarization, with the purpose of creating a 4D avatar from a portrait image in an arbitrary style. We select parametric triplanes as the intermediate 4D representation and propose a practical training paradigm that takes advantage of both generative adversarial networks (GANs) and diffusion models. Our design stems from the observation that 4D GANs excel at bridging images and triplanes without supervision yet usually face challenges in handling diverse data distributions. A robust 2D diffusion prior emerges as the solution, assisting the GAN in transferring its expertise across various domains. The synergy between these experts permits the construction of a multi-domain image-triplane dataset, which drives the development of a general 4D avatar creator. Extensive experiments suggest that our model, AvatarArtist, is capable of producing high-quality 4D avatars with strong robustness to various source image domains. The code, the data, and the models will be made publicly available to facilitate future studies.
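The data-construction loop described in the abstract can be sketched in code. This is a minimal toy illustration, not the authors' implementation: the diffusion prior, the 4D GAN, and the triplane shape below are all placeholder assumptions, and only the overall flow follows the paper (the 2D diffusion prior re-styles portraits into new domains, the 4D GAN maps each image to a parametric triplane, and the resulting pairs form a multi-domain image-triplane dataset).

```python
import numpy as np

# Hypothetical triplane layout: 3 axis-aligned planes, H x W spatial grid,
# a small per-point feature dimension. The real model's shape is not given here.
TRIPLANE_SHAPE = (3, 32, 32, 8)

def diffusion_stylize(image: np.ndarray, domain: str) -> np.ndarray:
    """Placeholder for the 2D diffusion prior: re-style an image into a target domain.
    Here we only perturb pixels deterministically per domain, as a stand-in."""
    rng = np.random.default_rng(sum(ord(c) for c in domain))
    return np.clip(image + 0.1 * rng.standard_normal(image.shape), 0.0, 1.0)

def gan_invert_to_triplane(image: np.ndarray) -> np.ndarray:
    """Placeholder for the 4D GAN bridging images and triplanes:
    here we just reshape pixel data into the triplane layout."""
    flat = np.resize(image, int(np.prod(TRIPLANE_SHAPE)))
    return flat.reshape(TRIPLANE_SHAPE)

def build_multi_domain_dataset(images, domains):
    """Pair each stylized image with its triplane, across all target domains."""
    dataset = []
    for img in images:
        for domain in domains:
            styled = diffusion_stylize(img, domain)
            triplane = gan_invert_to_triplane(styled)
            dataset.append({"domain": domain, "image": styled, "triplane": triplane})
    return dataset

portraits = [np.random.default_rng(0).random((64, 64, 3))]
data = build_multi_domain_dataset(portraits, ["realistic", "cartoon", "oil-painting"])
print(len(data), data[0]["triplane"].shape)  # one pair per (portrait, domain)
```

A general 4D avatar creator would then be trained on these image-triplane pairs, which is what gives the final model its robustness across source domains.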