
ThermalNeRF: Thermal Radiance Fields

Yvette Y. Lin, Xin-Yi Pan, Sara Fridovich-Keil, Gordon Wetzstein

2024-07-23


Summary

This paper introduces ThermalNeRF, a new technique for creating 3D models using thermal and visible light images. It allows for better visualization of heat patterns in various conditions, such as low light or fog, by combining data from both types of cameras.

What's the problem?

Thermal imaging is useful for many applications, like monitoring crops or inspecting buildings, but creating accurate 3D models from thermal images is difficult. Long-wave infrared images have lower resolution and fewer distinctive visual features than regular photographs, which makes it hard to match views and reconstruct scenes accurately. Additionally, existing methods struggle to combine information from thermal and visible light images effectively.

What's the solution?

To tackle these challenges, the authors developed a unified framework that uses both long-wave infrared (LWIR) images and regular RGB images to represent a scene as a single multispectral radiance field. As a preprocessing step, they calibrate the two cameras with respect to each other using a simple calibration target, so the thermal and visible views align in 3D. Because both spectra share one underlying scene representation, the method can produce detailed 3D models that capture thermal information, enhance the resolution of thermal images (thermal super-resolution), and visually remove obstacles to reveal objects that are occluded in either the RGB or thermal channels.
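The core idea of a shared multispectral radiance field can be illustrated with a toy sketch: one density field describes the scene's geometry, while separate radiance values are rendered for the visible and thermal channels using the same volume-rendering quadrature that NeRF-style methods use. This is a minimal illustration of the general technique, not the authors' implementation; the sample values and the `render_ray` helper are made up for the example.

```python
import numpy as np

def render_ray(ts, density, radiance):
    """Volume-render radiance samples along one ray (standard NeRF quadrature).

    ts:       (N,) sample depths along the ray
    density:  (N,) volume density (sigma) at each sample -- shared geometry
    radiance: (N, C) per-sample radiance in C spectral channels
    """
    deltas = np.diff(ts, append=ts[-1] + (ts[-1] - ts[-2]))
    alpha = 1.0 - np.exp(-density * deltas)                        # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance to each sample
    weights = alpha * trans
    return weights @ radiance                                      # rendered pixel, shape (C,)

# Toy scene: one shared density, two radiance "heads" (RGB and 1-channel LWIR).
ts = np.linspace(0.0, 1.0, 64)
density = np.full_like(ts, 5.0)               # shared geometry across spectra
rgb = np.tile([0.2, 0.5, 0.8], (64, 1))       # visible-spectrum radiance samples
thermal = np.full((64, 1), 0.9)               # thermal (LWIR) radiance samples

pixel_rgb = render_ray(ts, density, rgb)      # rendered RGB pixel
pixel_lwir = render_ray(ts, density, thermal) # rendered thermal pixel
```

Because the density (and thus the rendering weights) is shared, supervision from the higher-resolution RGB images constrains the geometry that the thermal channel is rendered through, which is what lets information flow across the two spectra.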

Why it matters?

This research is important because it improves our ability to visualize and analyze thermal data in 3D, which can be applied in various fields such as agriculture, construction, and safety inspections. By providing a clearer understanding of heat patterns in different environments, ThermalNeRF can help professionals make better decisions based on accurate thermal imaging.

Abstract

Thermal imaging has a variety of applications, from agricultural monitoring to building inspection to imaging under poor visibility, such as in low light, fog, and rain. However, reconstructing thermal scenes in 3D presents several challenges due to the comparatively lower resolution and limited features present in long-wave infrared (LWIR) images. To overcome these challenges, we propose a unified framework for scene reconstruction from a set of LWIR and RGB images, using a multispectral radiance field to represent a scene viewed by both visible and infrared cameras, thus leveraging information across both spectra. We calibrate the RGB and infrared cameras with respect to each other, as a preprocessing step using a simple calibration target. We demonstrate our method on real-world sets of RGB and LWIR photographs captured from a handheld thermal camera, showing the effectiveness of our method at scene representation across the visible and infrared spectra. We show that our method is capable of thermal super-resolution, as well as visually removing obstacles to reveal objects that are occluded in either the RGB or thermal channels. Please see https://yvette256.github.io/thermalnerf for video results as well as our code and dataset release.