RRM: Relightable assets using Radiance guided Material extraction
Diego Gomez, Julien Philip, Adrien Kaiser, Élie Michel
2024-07-15

Summary
This paper introduces RRM, a method that improves the extraction of materials, geometry, and environment lighting from captured 3D scenes, enabling realistic re-rendering of those scenes under new lighting conditions.
What's the problem?
Rendering a captured 3D scene realistically under new lighting conditions is challenging, especially when the scene contains shiny or reflective surfaces. Existing inverse-rendering methods often mishandle such glossy scenes, recovering inaccurate materials and lighting, which limits their usefulness in real-world applications.
What's the solution?
RRM (Relightable assets using Radiance guided Material extraction) addresses these challenges by combining a physically-aware radiance field, which captures how light travels through the scene, with an expressive environment-light structure based on a Laplacian pyramid. The radiance field guides the recovery of physically-based material parameters, so materials and lighting can be extracted reliably even in complex scenes containing highly reflective objects. The researchers demonstrated that RRM retrieves material parameters more accurately than previous techniques, leading to more faithful relighting and higher-quality novel views of the scene.
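To make the idea concrete, the sketch below evaluates a standard Cook-Torrance GGX microfacet BRDF from the kind of per-point material parameters (albedo, roughness, metallic) that inverse-rendering methods like RRM recover. This is an illustrative model only; the function name and parameterization are assumptions, not the paper's exact shading model.

```python
import numpy as np

def ggx_brdf(n, v, l, albedo, roughness, metallic):
    """Cook-Torrance GGX microfacet BRDF for a single light direction.

    Illustrative only: shows how recovered physically-based parameters
    (albedo, roughness, metallic) turn into reflected color; not the
    paper's exact shading model.
    """
    h = (v + l) / np.linalg.norm(v + l)               # half vector
    n_dot_l = max(float(n @ l), 1e-4)
    n_dot_v = max(float(n @ v), 1e-4)
    n_dot_h = max(float(n @ h), 1e-4)
    v_dot_h = max(float(v @ h), 1e-4)

    a2 = roughness ** 4                               # alpha = roughness^2, squared
    d = a2 / (np.pi * (n_dot_h ** 2 * (a2 - 1.0) + 1.0) ** 2)  # GGX normal distribution

    k = (roughness + 1.0) ** 2 / 8.0                  # Schlick-GGX geometry term
    g = (n_dot_v / (n_dot_v * (1 - k) + k)) * (n_dot_l / (n_dot_l * (1 - k) + k))

    f0 = 0.04 * (1.0 - metallic) + albedo * metallic  # Fresnel reflectance at normal incidence
    f = f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5        # Schlick Fresnel approximation

    specular = d * g * f / (4.0 * n_dot_l * n_dot_v)
    diffuse = (1.0 - metallic) * albedo / np.pi       # Lambertian diffuse lobe
    return diffuse + specular                          # RGB reflectance for this (v, l) pair
```

Relighting a pixel then amounts to integrating this BRDF against the recovered environment light over incoming directions, which is why accurate material parameters matter.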
Why it matters?
This research is significant because it enhances the ability to create high-quality 3D graphics, which are essential in fields like video game design, virtual reality, and film production. By improving how we extract and manipulate lighting and materials in 3D environments, RRM can help artists and developers create more realistic and visually appealing content.
Abstract
Synthesizing NeRFs under arbitrary lighting has become a prominent problem in recent years. Recent efforts tackle it via the extraction of physically-based parameters that can then be rendered under arbitrary lighting, but they are limited in the range of scenes they can handle, usually mishandling glossy scenes. We propose RRM, a method that can extract the materials, geometry, and environment lighting of a scene even in the presence of highly reflective objects. Our method consists of a physically-aware radiance field representation that informs physically-based parameters, and an expressive environment light structure based on a Laplacian pyramid. We demonstrate that our contributions outperform the state of the art on parameter retrieval tasks, leading to high-fidelity relighting and novel view synthesis on surfacic scenes.
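As a concrete illustration of the environment-light structure named in the abstract, the sketch below decomposes an equirectangular environment map into a Laplacian pyramid: coarse bands capture smooth ambient lighting, while fine bands retain the sharp detail needed for glossy reflections. This is a simplified, non-decimated variant; the function names and hyperparameters are assumptions for illustration, not the paper's implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_pyramid(env_map, levels=4, sigma=2.0):
    """Split an (H, W, 3) environment map into band-pass levels.

    Simplified, non-decimated variant: each level is the difference between
    successive Gaussian blurs, and summing all levels reconstructs the map
    exactly. `levels` and `sigma` are illustrative defaults.
    """
    bands = []
    current = env_map.astype(np.float64)
    for i in range(levels - 1):
        s = sigma * 2 ** i                                   # double the blur each level
        blurred = gaussian_filter(current, sigma=(s, s, 0))  # blur H and W, not channels
        bands.append(current - blurred)                      # band-pass detail at this scale
        current = blurred
    bands.append(current)                                    # coarsest low-pass residual
    return bands

def reconstruct(bands):
    """Exact inverse: the pyramid is a telescoping sum."""
    return sum(bands)
```

Optimizing such levels from coarse to fine is one way an expressive light representation can recover both soft ambient illumination and the crisp highlights reflected by glossy surfaces.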