ALICE-LRI: A General Method for Lossless Range Image Generation for Spinning LiDAR Sensors without Calibration Metadata

Samuel Soutullo, Miguel Yermo, David L. Vilariño, Óscar G. Lorenzo, José C. Cabaleiro, Francisco F. Rivera

2025-10-27

Summary

This paper introduces ALICE-LRI, a method for converting the 3D point clouds collected by spinning LiDAR sensors into 2D images without losing any data. LiDAR is widely used in applications such as self-driving cars and mapping.

What's the problem?

LiDAR sensors produce huge amounts of 3D data, so it is common to project the points into a 2D "range image" for easier processing. However, existing projection methods are lossy: when the sensor's exact geometry is not modeled, multiple 3D points can land on the same image pixel and overwrite each other, so the original point cloud can no longer be reconstructed exactly. For applications that need very precise data, this trade-off between convenience and accuracy is a real problem.
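To make the loss concrete, here is a minimal illustrative sketch (not the paper's method) of a naive spherical projection into a fixed-size range image. The grid size and field-of-view values are arbitrary assumptions; the point is that two nearby 3D points can round to the same pixel, and one of them is irrecoverably overwritten.

```python
import numpy as np

def naive_range_image(points, h=64, w=1024, fov_up=2.0, fov_down=-24.8):
    """Project Nx3 points into an h x w range image (lossy).

    fov_up/fov_down are assumed vertical field-of-view limits in degrees.
    """
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(points[:, 1], points[:, 0])   # horizontal angle, [-pi, pi]
    elevation = np.arcsin(points[:, 2] / r)            # vertical angle

    # Map angles to integer pixel coordinates -- this rounding is the loss.
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    u = ((azimuth + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = np.clip(((fov_up_r - elevation) / (fov_up_r - fov_down_r) * h).astype(int),
                0, h - 1)

    img = np.full((h, w), np.nan)
    img[v, u] = r                                      # colliding points overwrite
    occupied = np.count_nonzero(~np.isnan(img))
    return img, len(points) - occupied                 # how many points were lost

# Two points whose angles round to the same pixel: one is silently dropped.
pts = np.array([[10.0, 0.0, 0.0], [10.001, 0.0, 0.0001]])
_, lost = naive_range_image(pts)
print(lost)  # 1
```

Reversing this projection can only recover one point per occupied pixel, which is exactly the kind of irreversible information loss the paper targets.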

What's the solution?

ALICE-LRI automatically figures out the specific geometry of any spinning LiDAR sensor without needing any special information from the manufacturer. It does this by analyzing the data itself to understand how the laser beams are arranged and how to project the 3D points into a 2D image without losing any data. This allows for a perfect reconstruction of the original 3D point cloud from the 2D image.
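As a simplified sketch of one ingredient of such an approach (an assumption on my part, not the paper's actual algorithm), the laser-beam row of each point can be recovered by looking for gaps in the sorted per-point elevation angles, with no calibration file involved. The `min_gap` threshold and the synthetic four-beam cloud below are illustrative choices.

```python
import numpy as np

def infer_beam_rows(points, min_gap=1e-3):
    """Assign each point to a beam by finding gaps (> min_gap rad)
    in the sorted elevation angles."""
    r = np.linalg.norm(points, axis=1)
    elevation = np.arcsin(points[:, 2] / r)
    order = np.argsort(elevation)
    sorted_elev = elevation[order]

    # A jump in elevation much larger than the in-beam jitter starts a new beam.
    new_beam = np.diff(sorted_elev) > min_gap
    beam_of_sorted = np.concatenate(([0], np.cumsum(new_beam)))

    rows = np.empty(len(points), dtype=int)
    rows[order] = beam_of_sorted                # undo the sort
    return rows

# Synthetic cloud: 4 beams at distinct elevations, 100 points each,
# with small angular jitter, at arbitrary azimuths and 20 m range.
rng = np.random.default_rng(0)
beam_angles = np.radians([-15.0, -5.0, 5.0, 15.0])
elev = np.repeat(beam_angles, 100) + rng.normal(0, 1e-5, 400)
az = rng.uniform(-np.pi, np.pi, 400)
pts = 20.0 * np.stack([np.cos(elev) * np.cos(az),
                       np.cos(elev) * np.sin(az),
                       np.sin(elev)], axis=1)

rows = infer_beam_rows(pts)
print(len(np.unique(rows)))  # 4
```

Once every point has a known beam row (and an azimuth column), each point gets its own pixel, which is what makes a truly invertible range image possible.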

Why it matters?

This is a big step forward because it allows for much more accurate remote sensing and mapping. By eliminating information loss during the conversion to 2D, applications requiring high precision, like detailed environmental monitoring or creating accurate maps for autonomous vehicles, can be significantly improved. It also opens up possibilities for better data compression and overall efficiency.

Abstract

3D LiDAR sensors are essential for autonomous navigation, environmental monitoring, and precision mapping in remote sensing applications. To efficiently process the massive point clouds generated by these sensors, LiDAR data is often projected into 2D range images that organize points by their angular positions and distances. While these range image representations enable efficient processing, conventional projection methods suffer from fundamental geometric inconsistencies that cause irreversible information loss, compromising high-fidelity applications. We present ALICE-LRI (Automatic LiDAR Intrinsic Calibration Estimation for Lossless Range Images), the first general, sensor-agnostic method that achieves lossless range image generation from spinning LiDAR point clouds without requiring manufacturer metadata or calibration files. Our algorithm automatically reverse-engineers the intrinsic geometry of any spinning LiDAR sensor by inferring critical parameters including laser beam configuration, angular distributions, and per-beam calibration corrections, enabling lossless projection and complete point cloud reconstruction with zero point loss. Comprehensive evaluation across the complete KITTI and DurLAR datasets demonstrates that ALICE-LRI achieves perfect point preservation, with zero points lost across all point clouds. Geometric accuracy is maintained well within sensor precision limits, establishing geometric losslessness with real-time performance. We also present a compression case study that validates substantial downstream benefits, demonstrating significant quality improvements in practical applications. This paradigm shift from approximate to lossless LiDAR projections opens new possibilities for high-precision remote sensing applications requiring complete geometric preservation.