SpotLight: Shadow-Guided Object Relighting via Diffusion
Frédéric Fortier-Chouinard, Zitian Zhang, Louis-Etienne Messier, Mathieu Garon, Anand Bhattad, Jean-François Lalonde
2024-12-02

Summary
This paper presents SpotLight, a method that enables precise lighting control when inserting virtual objects into images, using the object's shadow to guide the relighting process.
What's the problem?
When inserting virtual objects into real images, it is difficult to make them look natural because their lighting may not match the scene. Traditional neural methods offer little control over how light interacts with the inserted object, which leads to unrealistic, incoherent composites.
What's the solution?
SpotLight addresses this issue by allowing users to specify the desired shadows of the virtual object instead of manually adjusting all lighting settings. By injecting just the shadow information into a pre-trained neural rendering model, SpotLight can accurately shade the object based on where the light is supposed to come from. This method works without needing additional training and can be applied to existing neural rendering techniques, resulting in better integration of the object into the background image.
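The core idea, injecting only the desired shadow into the denoising process of a frozen renderer, can be illustrated with a toy sketch. This is not the authors' implementation: `denoise_step` stands in for a pre-trained diffusion renderer, and the shadow projection and blending weights are simplified assumptions made up for illustration.

```python
import numpy as np

def render_shadow_mask(h, w, light_xy, obj_xy, radius=6.0):
    """Toy shadow: project the object's footprint away from the light."""
    yy, xx = np.mgrid[0:h, 0:w]
    # Place the shadow on the far side of the object from the light.
    cy, cx = np.array(obj_xy) + 0.3 * (np.array(obj_xy) - np.array(light_xy))
    return (np.hypot(yy - cy, xx - cx) < radius).astype(np.float32)

def denoise_step(latent, t):
    """Stand-in for one step of a pre-trained diffusion renderer (assumption)."""
    return 0.9 * latent  # toy: gradually remove "noise"

def spotlight_inject(background, shadow_mask, steps=10, shadow_strength=0.6):
    """At every denoising step, composite the desired shadow into the
    conditioning image -- no retraining of the renderer, only shadow input."""
    rng = np.random.default_rng(0)
    latent = background + rng.normal(0.0, 1.0, background.shape)
    # Conditioning image with the user-specified shadow darkened in.
    cond = background * (1.0 - shadow_strength * shadow_mask)
    for t in range(steps):
        latent = denoise_step(latent, t) + 0.1 * (cond - latent)
    return latent

h = w = 32
bg = np.ones((h, w), dtype=np.float32)
mask = render_shadow_mask(h, w, light_xy=(4, 4), obj_xy=(16, 16))
out = spotlight_inject(bg, mask)
# The guided result is darker inside the requested shadow region.
assert out[mask > 0].mean() < out[mask == 0].mean()
```

The point of the sketch is the control flow: the light position determines only the shadow mask, and the mask alone steers the (frozen) denoiser toward a consistently lit composite.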
Why it matters?
This research is important because it enhances the ability to create realistic images with virtual objects, which is essential in fields like video game development, film production, and augmented reality. By improving how lighting is controlled in these scenarios, SpotLight can help artists and designers produce high-quality visuals that are more engaging and believable.
Abstract
Recent work has shown that diffusion models can be used as powerful neural rendering engines that can be leveraged for inserting virtual objects into images. Unlike typical physics-based renderers, however, neural rendering engines are limited by the lack of manual control over the lighting setup, which is often essential for improving or personalizing the desired image outcome. In this paper, we show that precise lighting control can be achieved for object relighting simply by specifying the desired shadows of the object. Rather surprisingly, we show that injecting only the shadow of the object into a pre-trained diffusion-based neural renderer enables it to accurately shade the object according to the desired light position, while properly harmonizing the object (and its shadow) within the target background image. Our method, SpotLight, leverages existing neural rendering approaches and achieves controllable relighting results with no additional training. Specifically, we demonstrate its use with two neural renderers from the recent literature. We show that SpotLight achieves superior object compositing results, both quantitatively and perceptually, as confirmed by a user study, outperforming existing diffusion-based models specifically designed for relighting.