LumiNet: Latent Intrinsics Meets Diffusion Models for Indoor Scene Relighting
Xiaoyan Xing, Konrad Groh, Sezer Karaoglu, Theo Gevers, Anand Bhattad
2024-12-05

Summary
This paper introduces LumiNet, a new system that changes the lighting of indoor scenes by combining diffusion-based generative models with latent intrinsic representations of images.
What's the problem?
Changing the lighting in images of indoor scenes can be complicated because traditional methods often require detailed 3D models or multiple images from different angles. These methods can be time-consuming and may not produce realistic results, especially when trying to replicate complex lighting effects like reflections and shadows.
What's the solution?
LumiNet solves this problem by synthesizing a new version of a source image that captures the lighting of a target image, without needing 3D models or multi-view captures. It does this through two main contributions: first, a data curation strategy that uses an existing StyleGAN-based relighting model to produce training data, and second, a modified ControlNet-style diffusion model that processes latent intrinsic properties (roughly, geometry and albedo) from the source image together with latent extrinsic properties (lighting) from the target image. This lets LumiNet preserve the structure and materials of the source scene while applying the desired lighting from the target. The system also includes a learned MLP adaptor that injects the target's lighting representation into the diffusion model via cross-attention, further refining the lighting transfer.
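To make the two-stream idea concrete, below is a minimal, hypothetical PyTorch sketch of ControlNet-style conditioning on two different images. It is not the authors' implementation: module names such as IntrinsicEncoder, ExtrinsicEncoder, and RelightControlBranch, and all dimensions, are illustrative assumptions. The source image supplies spatial "latent intrinsic" features, the target image supplies a global "latent extrinsic" lighting code, and the control branch fuses them into features that would be added into a frozen diffusion denoiser.

```python
# Hypothetical sketch of two-stream conditioning (not the paper's code).
import torch
import torch.nn as nn

class IntrinsicEncoder(nn.Module):
    """Encodes the source image into a spatial 'latent intrinsic' feature map."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.SiLU(),
        )
    def forward(self, x):
        return self.net(x)

class ExtrinsicEncoder(nn.Module):
    """Encodes the target image into a global 'latent extrinsic' lighting code."""
    def __init__(self, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.SiLU(),
            nn.AdaptiveAvgPool2d(1),
        )
    def forward(self, x):
        return self.backbone(x).flatten(1)   # (B, dim)

class RelightControlBranch(nn.Module):
    """ControlNet-style branch: fuses spatial intrinsics with the global lighting
    code and produces control features to add into a frozen diffusion U-Net."""
    def __init__(self, dim=64):
        super().__init__()
        self.light_proj = nn.Linear(dim, dim)      # maps lighting code to feature space
        self.fuse = nn.Conv2d(dim, dim, 3, padding=1)
    def forward(self, intrinsics, extrinsics):
        light = self.light_proj(extrinsics)[:, :, None, None]   # broadcast over space
        return self.fuse(intrinsics + light)

# Toy usage: the source provides structure, the target provides lighting.
src, tgt = torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256)
ctrl = RelightControlBranch()(IntrinsicEncoder()(src), ExtrinsicEncoder()(tgt))
print(ctrl.shape)   # e.g. torch.Size([1, 64, 64, 64])
```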
Why it matters?
This research is important because it simplifies the process of relighting indoor scenes, making it more accessible for applications like interior design, video production, and virtual reality. By enabling realistic lighting changes with just images as input, LumiNet can enhance visual storytelling and improve the quality of digital content.
Abstract
We introduce LumiNet, a novel architecture that leverages generative models and latent intrinsic representations for effective lighting transfer. Given a source image and a target lighting image, LumiNet synthesizes a relit version of the source scene that captures the target's lighting. Our approach makes two key contributions: a data curation strategy from the StyleGAN-based relighting model for our training, and a modified diffusion-based ControlNet that processes both latent intrinsic properties from the source image and latent extrinsic properties from the target image. We further improve lighting transfer through a learned adaptor (MLP) that injects the target's latent extrinsic properties via cross-attention and fine-tuning. Unlike traditional ControlNet, which generates images with conditional maps from a single scene, LumiNet processes latent representations from two different images - preserving geometry and albedo from the source while transferring lighting characteristics from the target. Experiments demonstrate that our method successfully transfers complex lighting phenomena including specular highlights and indirect illumination across scenes with varying spatial layouts and materials, outperforming existing approaches on challenging indoor scenes using only images as input.
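As a companion to the abstract's description of the learned adaptor, here is a minimal sketch (with assumed names, dimensions, and shapes, not the released code) of how an MLP adaptor could turn the target's latent extrinsic code into tokens that the denoiser's cross-attention layers attend to.

```python
# Hypothetical sketch of cross-attention injection of a lighting code (not the paper's code).
import torch
import torch.nn as nn

class ExtrinsicAdaptor(nn.Module):
    """Learned MLP that maps a latent extrinsic (lighting) code to cross-attention tokens."""
    def __init__(self, light_dim=64, ctx_dim=320, num_tokens=4):
        super().__init__()
        self.num_tokens, self.ctx_dim = num_tokens, ctx_dim
        self.mlp = nn.Sequential(
            nn.Linear(light_dim, ctx_dim), nn.SiLU(),
            nn.Linear(ctx_dim, num_tokens * ctx_dim),
        )
    def forward(self, light_code):                      # (B, light_dim)
        tokens = self.mlp(light_code)                   # (B, num_tokens * ctx_dim)
        return tokens.view(-1, self.num_tokens, self.ctx_dim)

# Inside a denoiser block, flattened U-Net features (queries) attend to the lighting tokens (keys/values).
attn = nn.MultiheadAttention(embed_dim=320, num_heads=8, batch_first=True)
unet_feats = torch.randn(1, 1024, 320)                  # flattened spatial features
light_tokens = ExtrinsicAdaptor()(torch.randn(1, 64))   # (1, 4, 320)
relit_feats, _ = attn(unet_feats, light_tokens, light_tokens)
print(relit_feats.shape)                                # torch.Size([1, 1024, 320])
```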