
Learning Human-Object Interaction for 3D Human Pose Estimation from LiDAR Point Clouds

Daniel Sungho Jung, Dohee Cho, Kyoung Mu Lee

2026-03-18

Summary

This paper focuses on improving how self-driving cars 'understand' people using data from LiDAR sensors, which build 3D point-cloud maps of the environment. Specifically, it aims to better estimate the 3D pose of pedestrians – the positions of their body joints – even when they are interacting with objects around them.

What's the problem?

Estimating a person's pose from LiDAR data is tricky because people often interact with things like cars, bikes, or even just lean on walls. These interactions create confusion in the data; it's hard to tell which points belong to the person and which belong to the object they're touching. Also, the parts of the body that *are* interacting with objects, like hands and feet, often have fewer data points in the LiDAR scan, making them harder to detect accurately. There's an imbalance – we have lots of data for the main body but not enough for the interacting parts.

What's the solution?

The researchers developed a system called HOIL, short for Human-Object Interaction Learning, which tackles these problems in three ways. First, it uses a technique called contrastive learning to clearly separate points belonging to the person from points belonging to the object they're interacting with, especially in those confusing contact regions. Second, it uses a 'pooling' method that keeps the important points from interacting body parts, even when there are only a few of them, while compressing the abundant points from the rest of the body – effectively giving the interaction areas more attention. Finally, an optional step looks at how the pose changes over time, exploiting the fact that contact with objects usually persists across consecutive frames to refine the per-frame pose estimates.
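To make the contrastive-learning idea concrete, here is a minimal numpy sketch of a supervised contrastive loss over point features, where points from the same class (human or object) are pulled together and points from different classes are pushed apart. This is only an illustration of the general technique, not the paper's actual HOICL loss; the function name and temperature value are assumptions.

```python
import numpy as np

def point_contrastive_loss(feats, labels, temperature=0.1):
    """Toy supervised contrastive loss over point features.

    feats:  (N, D) point features (normalized inside)
    labels: (N,)   0 = object point, 1 = human point
    Points with the same label act as positives, points with
    a different label act as negatives.
    """
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / temperature            # (N, N) cosine similarities
    np.fill_diagonal(sim, -np.inf)                 # exclude self-pairs
    # row-wise softmax over all other points
    exp = np.exp(sim - sim.max(axis=1, keepdims=True))
    prob = exp / exp.sum(axis=1, keepdims=True)
    loss = 0.0
    for i in range(len(labels)):
        pos = labels == labels[i]
        pos[i] = False                             # a point is not its own positive
        if pos.any():
            loss += -np.log(prob[i, pos].mean() + 1e-12)
    return loss / len(labels)
```

Training with a loss like this pushes human and object point features apart, so that downstream keypoint prediction is less confused by object points near the body.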

Why it matters?

This work is important because accurately understanding pedestrians is crucial for self-driving car safety. If a car can't correctly identify where a person's joints are, it can't predict their movements and avoid collisions. By specifically addressing the challenges of human-object interactions, this research helps make autonomous vehicles more reliable and safer for everyone.

Abstract

Understanding humans from LiDAR point clouds is one of the most critical tasks in autonomous driving due to its close relationships with pedestrian safety, yet it remains challenging in the presence of diverse human-object interactions and cluttered backgrounds. Nevertheless, existing methods largely overlook the potential of leveraging human-object interactions to build robust 3D human pose estimation frameworks. There are two major challenges that motivate the incorporation of human-object interaction. First, human-object interactions introduce spatial ambiguity between human and object points, which often leads to erroneous 3D human keypoint predictions in interaction regions. Second, there exists severe class imbalance in the number of points between interacting and non-interacting body parts, with the interaction-frequent regions such as hand and foot being sparsely observed in LiDAR data. To address these challenges, we propose a Human-Object Interaction Learning (HOIL) framework for robust 3D human pose estimation from LiDAR point clouds. To mitigate the spatial ambiguity issue, we present human-object interaction-aware contrastive learning (HOICL) that effectively enhances feature discrimination between human and object points, particularly in interaction regions. To alleviate the class imbalance issue, we introduce contact-aware part-guided pooling (CPPool) that adaptively reallocates representational capacity by compressing overrepresented points while preserving informative points from interacting body parts. In addition, we present an optional contact-based temporal refinement that refines erroneous per-frame keypoint estimates using contact cues over time. As a result, our HOIL effectively leverages human-object interaction to resolve spatial ambiguity and class imbalance in interaction regions. Codes will be released.
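The class-imbalance remedy described above can be illustrated with a small numpy sketch: non-contact body parts with many points are compressed to a fixed budget, while all points on sparsely observed, interacting parts are preserved. This is a loose illustration of the idea behind CPPool, not the paper's learned implementation; the function signature, the random subsampling, and the per-part budget are assumptions.

```python
import numpy as np

def contact_aware_pool(feats, part_ids, contact_parts, budget_per_part=8, rng=None):
    """Toy part-guided pooling, loosely inspired by the CPPool idea.

    feats:         (N, D) point features
    part_ids:      (N,)   body-part id per point
    contact_parts: set of part ids currently in contact with an object
    Overrepresented non-contact parts are compressed to at most
    `budget_per_part` points (random subsample here; the paper uses a
    learned scheme), while every point on a contact part is kept.
    """
    rng = rng or np.random.default_rng(0)
    keep = []
    for p in np.unique(part_ids):
        idx = np.flatnonzero(part_ids == p)
        if p in contact_parts or len(idx) <= budget_per_part:
            keep.append(idx)                       # preserve sparse / interacting parts
        else:
            keep.append(rng.choice(idx, budget_per_part, replace=False))
    keep = np.concatenate(keep)
    return feats[keep], part_ids[keep]
```

After pooling, a dense torso no longer dominates the feature set, so the few points on a hand gripping a handlebar carry proportionally more weight.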