
Subsurface Scattering for 3D Gaussian Splatting

Jan-Niklas Dihlmann, Arjun Majumdar, Andreas Engelhardt, Raphael Braun, Hendrik P. A. Lensch

2024-08-23


Summary

This paper presents a new method for reconstructing and realistically rendering 3D objects made from translucent materials, where light penetrates and scatters beneath the surface, a phenomenon known as subsurface scattering.

What's the problem?

Recreating 3D objects made from materials like skin or wax is challenging because light does not simply bounce off the surface; it travels and scatters underneath it. Traditional methods struggle to accurately represent this behavior, leading to less realistic images and long optimization and rendering times.

What's the solution?

The authors propose a framework that combines two representations. First, they model the object's surface explicitly with 3D Gaussians, mathematical primitives that efficiently approximate the geometry, each carrying spatially varying reflectance (BRDF) parameters. Second, they add an implicit volumetric representation that captures how light scatters inside the material, along with a learned incident light field that accounts for shadowing. All parts are optimized jointly with ray-traced differentiable rendering, which gives detailed control over how the material looks under different lighting and enables faster, more accurate relighting and novel view synthesis.
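To make the decomposition concrete, here is a minimal sketch of how such a hybrid model could be structured in PyTorch: explicit per-Gaussian surface parameters plus a small MLP for the subsurface component. The class name, parameter names, and network sizes are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class ScatteringSceneModel(nn.Module):
    # Hypothetical sketch of the hybrid decomposition: explicit 3D Gaussians
    # with per-Gaussian BRDF parameters plus a small MLP for the volumetric
    # subsurface term. Names and sizes are illustrative, not the authors' code.
    def __init__(self, num_gaussians):
        super().__init__()
        # Explicit surface: per-Gaussian geometry and appearance parameters.
        self.positions = nn.Parameter(torch.randn(num_gaussians, 3))
        self.log_scales = nn.Parameter(torch.zeros(num_gaussians, 3))
        self.rotations = nn.Parameter(torch.randn(num_gaussians, 4))   # quaternions
        self.opacity = nn.Parameter(torch.zeros(num_gaussians, 1))
        self.albedo = nn.Parameter(torch.rand(num_gaussians, 3))       # diffuse color
        self.roughness = nn.Parameter(torch.rand(num_gaussians, 1))    # specular lobe
        # Implicit volume: an MLP mapping (point, light dir, view dir) to the
        # RGB light that emerges after scattering beneath the surface.
        self.scatter_mlp = nn.Sequential(
            nn.Linear(9, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def subsurface(self, points, light_dirs, view_dirs):
        # Query the learned scattering component at surface points.
        return self.scatter_mlp(torch.cat([points, light_dirs, view_dirs], dim=-1))

In a full renderer, the per-Gaussian BRDF would supply the direct surface reflection, and this learned term would be added on top to account for light emerging after traveling through the material.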

Why it matters?

This research is important because it enhances the ability to create realistic 3D visuals for various applications, such as video games and movies. By improving how we simulate light behavior in materials, creators can produce higher-quality graphics more efficiently, which can lead to better user experiences in multimedia.

Abstract

3D reconstruction and relighting of objects made from scattering materials present a significant challenge due to the complex light transport beneath the surface. 3D Gaussian Splatting introduced high-quality novel view synthesis at real-time speeds. While 3D Gaussians efficiently approximate an object's surface, they fail to capture the volumetric properties of subsurface scattering. We propose a framework for optimizing an object's shape together with the radiance transfer field given multi-view OLAT (one light at a time) data. Our method decomposes the scene into an explicit surface represented as 3D Gaussians, with a spatially varying BRDF, and an implicit volumetric representation of the scattering component. A learned incident light field accounts for shadowing. We optimize all parameters jointly via ray-traced differentiable rendering. Our approach enables material editing, relighting and novel view synthesis at interactive rates. We show successful application on synthetic data and introduce a newly acquired multi-view multi-light dataset of objects in a light-stage setup. Compared to previous work we achieve comparable or better results at a fraction of optimization and rendering time while enabling detailed control over material attributes. Project page https://sss.jdihlmann.com/
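For readers who want a feel for the joint optimization described in the abstract, the following is a minimal sketch of a training loop over multi-view OLAT data, assuming a differentiable render(model, camera, light) function and a data loader that yields (camera, light, ground-truth image) triples. These names are placeholders, not the paper's actual API.

import torch
import torch.nn.functional as F

def optimize_jointly(model, olat_loader, render, num_steps=30_000, lr=1e-3):
    # All parameters (Gaussians, BRDF, scattering MLP, incident light field)
    # receive gradients from the same photometric loss.
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    step = 0
    while step < num_steps:
        for camera, light, gt_image in olat_loader:
            pred = render(model, camera, light)   # ray-traced differentiable render
            loss = F.l1_loss(pred, gt_image)      # compare to the captured OLAT image
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            step += 1
            if step >= num_steps:
                break
    return model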