
Relightable Full-Body Gaussian Codec Avatars

Shaofei Wang, Tomas Simon, Igor Santesteban, Timur Bagautdinov, Junxuan Li, Vasu Agrawal, Fabian Prada, Shoou-I Yu, Pace Nalbone, Matt Gramlich, Roman Lubachersky, Chenglei Wu, Javier Romero, Jason Saragih, Michael Zollhoefer, Andreas Geiger, Siyu Tang, Shunsuke Saito

2025-01-27

Summary

This paper introduces a new way to create realistic 3D avatars of people that can be relit and animated naturally. It's like making a digital version of a person that looks real under different lighting conditions and moves the way a real person would.

What's the problem?

Creating realistic 3D avatars is tricky because when a person moves, the way light hits their body changes a lot. It's like trying to make a digital actor look right under stage lights while they're dancing. Current methods struggle to keep the lighting looking natural when the avatar takes on new poses or is placed under lighting it was never trained on.

What's the solution?

The researchers came up with a clever way to handle this problem. They split the lighting effects into two parts: local changes (how light bounces off the skin) and non-local changes (like shadows cast by one body part onto another). They use something called 'zonal harmonics' to handle the local lighting changes, which is much cheaper to rotate than older spherical-harmonics approaches when the avatar moves (see the sketch below). For the non-local changes, they created a special 'shadow network' that predicts how shadows will fall on the body. Finally, a deferred shading step makes shiny parts of the body, like the glints in the eyes, look more realistic.
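To make the zonal-harmonics idea concrete, here is a minimal NumPy sketch. Rotating a zonal harmonic so its symmetry axis points along a direction d has a closed form: the rotated spherical-harmonic coefficients are c_{l,m} = sqrt(4π/(2l+1)) · z_l · Y_{l,m}(d), which is why per-Gaussian transfer can cheaply follow the body as it articulates. The function names, the band-2 cutoff, and the single-channel lighting are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sh_basis_l2(d):
    """Evaluate the 9 real spherical-harmonic basis functions (bands 0-2)
    at unit direction d = (x, y, z)."""
    x, y, z = d
    return np.array([
        0.282095,                        # Y_0^0
        0.488603 * y,                    # Y_1^-1
        0.488603 * z,                    # Y_1^0
        0.488603 * x,                    # Y_1^1
        1.092548 * x * y,                # Y_2^-2
        1.092548 * y * z,                # Y_2^-1
        0.315392 * (3.0 * z * z - 1.0),  # Y_2^0
        1.092548 * x * z,                # Y_2^1
        0.546274 * (x * x - y * y),      # Y_2^2
    ])

def rotate_zonal_harmonics(zh, axis):
    """Rotate zonal coefficients zh = (z_0, z_1, z_2) to point along `axis`.
    Closed form: c_{l,m} = sqrt(4*pi / (2l+1)) * z_l * Y_{l,m}(axis)."""
    bands = (0, 1, 1, 1, 2, 2, 2, 2, 2)           # band index l of each coefficient
    scale = np.sqrt(4.0 * np.pi / (2 * np.array(bands) + 1))
    zl = np.array([zh[l] for l in bands])          # broadcast z_l across its band
    return scale * zl * sh_basis_l2(axis)

def diffuse_shading(zh, axis, env_sh):
    """Diffuse radiance = <rotated transfer, environment lighting> in SH space."""
    return np.dot(rotate_zonal_harmonics(zh, axis), env_sh)

# usage: a Gaussian's learned transfer, oriented along its posed normal,
# shaded by a 9-coefficient environment light (all values illustrative)
zh = np.array([0.8, 0.5, 0.1])
normal = np.array([0.0, 0.0, 1.0])
env_sh = np.random.rand(9)
print(diffuse_shading(zh, normal, env_sh))
```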

Why it matters?

This matters because it could make virtual reality and video games much more realistic. Imagine being able to see a lifelike version of yourself or your friends in a game, looking natural no matter how you move or what the lighting is like. It could also be used in movies to create realistic digital characters, or in virtual try-on experiences for clothing. By making digital humans look more real, this technology could make our interactions with digital worlds feel much more natural and immersive.

Abstract

We propose Relightable Full-Body Gaussian Codec Avatars, a new approach for modeling relightable full-body avatars with fine-grained details including face and hands. The unique challenge for relighting full-body avatars lies in the large deformations caused by body articulation and the resulting impact on appearance caused by light transport. Changes in body pose can dramatically change the orientation of body surfaces with respect to lights, resulting in both local appearance changes due to changes in local light transport functions, as well as non-local changes due to occlusion between body parts. To address this, we decompose the light transport into local and non-local effects. Local appearance changes are modeled using learnable zonal harmonics for diffuse radiance transfer. Unlike spherical harmonics, zonal harmonics are highly efficient to rotate under articulation. This allows us to learn diffuse radiance transfer in a local coordinate frame, which disentangles the local radiance transfer from the articulation of the body. To account for non-local appearance changes, we introduce a shadow network that predicts shadows given precomputed incoming irradiance on a base mesh. This facilitates the learning of non-local shadowing between the body parts. Finally, we use a deferred shading approach to model specular radiance transfer and better capture reflections and highlights such as eye glints. We demonstrate that our approach successfully models both the local and non-local light transport required for relightable full-body avatars, with a superior generalization ability under novel illumination conditions and unseen poses.
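As a rough illustration of the shadow network described above, here is a minimal PyTorch sketch: a small convolutional network that maps precomputed incoming irradiance, unwrapped into the base mesh's UV space, to a multiplicative shadow map. The architecture, channel counts, and UV-space representation are assumptions made for illustration; the paper's actual network may differ.

```python
import torch
import torch.nn as nn

class ShadowNetwork(nn.Module):
    """Hypothetical shadow network: predicts a shadow map from precomputed
    incoming irradiance stored as a UV-space texture of the base mesh."""

    def __init__(self, in_channels=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),
            nn.Sigmoid(),  # shadow values in [0, 1]
        )

    def forward(self, irradiance_uv):
        # irradiance_uv: (B, 3, H, W) incoming irradiance in UV space
        return self.net(irradiance_uv)

# usage: a 256x256 UV-space irradiance map (random stand-in data)
irradiance_uv = torch.rand(1, 3, 256, 256)
shadow_map = ShadowNetwork()(irradiance_uv)  # (1, 1, 256, 256)
```

Conditioning on irradiance rather than raw pose is the key design choice: the network sees how much light actually reaches each point on the base mesh, so it can learn occlusion between body parts without having to infer the lighting itself.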