Light-A-Video: Training-free Video Relighting via Progressive Light Fusion

Yujie Zhou, Jiazi Bu, Pengyang Ling, Pan Zhang, Tong Wu, Qidong Huang, Jinsong Li, Xiaoyi Dong, Yuhang Zang, Yuhang Cao, Anyi Rao, Jiaqi Wang, Li Niu

2025-02-13

Summary

This paper introduces Light-A-Video, a new way to change the lighting in videos without needing to train complex AI models. It's like having a smart video editor that can adjust the lighting of an entire video smoothly and consistently.

What's the problem?

Changing the lighting in videos is hard because current methods either need a lot of computing power and special video examples to learn from, or they cause flickering when an image relighting model is applied to each frame separately. It's like trying to repaint a movie frame by frame but ending up with colors that don't match from one frame to the next.

What's the solution?

The researchers created Light-A-Video, which uses two clever tricks. First, it has a special attention system called Consistent Light Attention that looks at multiple frames at once to keep the background lighting stable. Second, it uses a method called Progressive Light Fusion that smoothly blends the original video's appearance with the new lighting, kind of like mixing paint colors gradually to get a smooth transition.
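The "mixing paint gradually" idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's code: `progressive_light_fusion`, the linear weight schedule, and the toy frame arrays are all assumptions; the actual method applies the blending inside a diffusion model's denoising loop.

```python
import numpy as np

def progressive_light_fusion(source, relit, num_steps):
    """Linearly blend source frames with relighted frames, letting the
    relight weight grow step by step (a hypothetical schedule that
    mimics the 'progressive' idea; the paper's exact schedule differs)."""
    fused_per_step = []
    for t in range(num_steps):
        w = (t + 1) / num_steps            # weight rises from 1/T to 1
        fused = (1.0 - w) * source + w * relit  # linear blend per pixel
        fused_per_step.append(fused)
    return fused_per_step

# Toy example: 2 frames of 4x4 RGB, dark source, bright relit target.
source = np.zeros((2, 4, 4, 3))
relit = np.ones((2, 4, 4, 3))
steps = progressive_light_fusion(source, relit, num_steps=4)
```

Because each step moves only part of the way toward the relighted appearance, the illumination shifts gradually instead of jumping, which is the intuition behind the smooth temporal transitions.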

Why it matters?

This matters because it could make video editing much easier and more accessible. Filmmakers, YouTubers, or anyone making videos could change the mood or time of day in their footage without expensive equipment or complicated software. It could lead to more creative and professional-looking videos being made by people with fewer resources, potentially changing how we create and enjoy visual content.

Abstract

Recent advancements in image relighting models, driven by large-scale datasets and pre-trained diffusion models, have enabled the imposition of consistent lighting. However, video relighting still lags, primarily due to the excessive training costs and the scarcity of diverse, high-quality video relighting datasets. A simple application of image relighting models on a frame-by-frame basis leads to several issues: lighting source inconsistency and relighted appearance inconsistency, resulting in flickers in the generated videos. In this work, we propose Light-A-Video, a training-free approach to achieve temporally smooth video relighting. Adapted from image relighting models, Light-A-Video introduces two key techniques to enhance lighting consistency. First, we design a Consistent Light Attention (CLA) module, which enhances cross-frame interactions within the self-attention layers to stabilize the generation of the background lighting source. Second, leveraging the physical principle of light transport independence, we apply linear blending between the source video's appearance and the relighted appearance, using a Progressive Light Fusion (PLF) strategy to ensure smooth temporal transitions in illumination. Experiments show that Light-A-Video improves the temporal consistency of relighted video while maintaining the image quality, ensuring coherent lighting transitions across frames. Project page: https://bujiazi.github.io/light-a-video.github.io/.
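The abstract says the CLA module "enhances cross-frame interactions within the self-attention layers." One simple way to realize that idea is to share keys and values across frames so every frame attends to the same context. The sketch below is an assumption-laden illustration of that cross-frame idea (the function name, the mean-pooling choice, and the shapes are mine; the paper's actual CLA design may differ):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def consistent_light_attention(q, k, v):
    """q, k, v: (frames, tokens, dim).

    Average keys/values over the frame axis so each frame's queries
    attend to one shared, temporally stable context -- a hypothetical
    stand-in for the paper's cross-frame interaction."""
    k_shared = k.mean(axis=0, keepdims=True)   # (1, tokens, dim)
    v_shared = v.mean(axis=0, keepdims=True)   # (1, tokens, dim)
    d = q.shape[-1]
    # (frames, tokens, tokens) attention weights via batched matmul
    attn = softmax(q @ k_shared.transpose(0, 2, 1) / np.sqrt(d))
    return attn @ v_shared                     # (frames, tokens, dim)

# Toy usage: 3 frames, 5 tokens each, 8-dim features.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((3, 5, 8)) for _ in range(3))
out = consistent_light_attention(q, k, v)
```

Pooling the keys and values removes per-frame variation in what the attention layer "sees," which is one plausible mechanism for stabilizing a background lighting source across frames.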