
Embodied Referring Expression Comprehension in Human-Robot Interaction

Md Mofijul Islam, Alexi Gladstone, Sujan Sarker, Ganesh Nanduru, Md Fahim, Keyan Du, Aman Chadha, Tariq Iqbal

2025-12-09


Summary

This paper focuses on helping robots better understand what people tell them to do, especially when those instructions involve gestures and happen in the real world, not just a lab.

What's the problem?

Currently, it's hard for robots to understand human instructions because there aren't enough good datasets to train them on. Existing datasets are limited: they often show scenes from only one viewpoint, capture little of people's body language, and focus mostly on indoor settings. This makes it difficult for robots to work effectively alongside people in everyday environments.

What's the solution?

The researchers created a new, large dataset called Refer360 that includes videos of people giving instructions, recorded from many different viewpoints, both indoors and outdoors, and capturing both what they say and how they move. They also developed a new technique called MuRes, which helps robots focus on the most important parts of both the spoken words and the body language to better understand the instructions. MuRes essentially acts as a filter that keeps the most informative signals from each modality and reinforces them in the model's existing representations; a rough sketch of this idea appears below.
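To make the "guided residual" idea more concrete, here is a minimal PyTorch-style sketch, written from the description above rather than the authors' code. The class name `GuidedResidualBlock`, the gating mechanism, and all dimensions are illustrative assumptions; the actual MuRes module may differ.

```python
# A minimal sketch of a guided residual "bottleneck" block (hypothetical,
# not the authors' implementation): compress one modality's features, gate
# them with cues from the other modality, and add the result back to the
# original pre-trained representation.
import torch
import torch.nn as nn

class GuidedResidualBlock(nn.Module):
    def __init__(self, dim: int, bottleneck_dim: int = 128):
        super().__init__()
        self.compress = nn.Linear(dim, bottleneck_dim)  # information bottleneck
        self.expand = nn.Linear(bottleneck_dim, dim)
        self.gate = nn.Linear(dim, bottleneck_dim)      # guidance from the other modality

    def forward(self, own: torch.Tensor, other: torch.Tensor) -> torch.Tensor:
        # Keep only a compressed, salient part of this modality's features...
        salient = torch.relu(self.compress(own))
        # ...weighted by cues from the other modality (e.g., gesture guiding language).
        guided = salient * torch.sigmoid(self.gate(other))
        # Reinforce the pre-trained representation with the guided residual.
        return own + self.expand(guided)

# Example: refine language features using gesture/pose features.
language_feats = torch.randn(4, 512)  # e.g., from a frozen language encoder
gesture_feats = torch.randn(4, 512)   # e.g., from a frozen pose/video encoder
block = GuidedResidualBlock(dim=512)
refined_language = block(language_feats, gesture_feats)  # shape: (4, 512)
```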

Why does it matter?

This work is important because it provides a better way to train robots to interact with humans more naturally. With a more comprehensive dataset and a stronger comprehension technique, robots can become more helpful and reliable partners in shared workspaces and everyday life, moving beyond controlled lab environments.

Abstract

As robots enter human workspaces, there is a crucial need for them to comprehend embodied human instructions, enabling intuitive and fluent human-robot interaction (HRI). However, accurate comprehension is challenging due to a lack of large-scale datasets that capture natural embodied interactions in diverse HRI settings. Existing datasets suffer from perspective bias, single-view collection, inadequate coverage of nonverbal gestures, and a predominant focus on indoor environments. To address these issues, we present the Refer360 dataset, a large-scale dataset of embodied verbal and nonverbal interactions collected across diverse viewpoints in both indoor and outdoor settings. Additionally, we introduce MuRes, a multimodal guided residual module designed to improve embodied referring expression comprehension. MuRes acts as an information bottleneck, extracting salient modality-specific signals and reinforcing them into pre-trained representations to form complementary features for downstream tasks. We conduct extensive experiments on four HRI datasets, including the Refer360 dataset, and demonstrate that current multimodal models fail to capture embodied interactions comprehensively; however, augmenting them with MuRes consistently improves performance. These findings establish Refer360 as a valuable benchmark and exhibit the potential of guided residual learning to advance embodied referring expression comprehension in robots operating within human environments.
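For readers who want a concrete picture of the downstream task, the sketch below shows one simple way residual-augmented verbal and nonverbal features could be fused and used to score candidate objects for referring expression comprehension. It is a hedged illustration under assumed interfaces: the class name `ReferringScorer`, the dot-product scoring, and the stand-in feature tensors are not from the paper.

```python
# Hypothetical usage sketch: fuse verbal and nonverbal instruction features,
# then score candidate objects in the scene to pick the referred one.
import torch
import torch.nn as nn

class ReferringScorer(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)   # combine the two modality streams
        self.project = nn.Linear(dim, dim)    # map candidate objects into the same space

    def forward(self, verbal, nonverbal, candidates):
        # verbal, nonverbal: (batch, dim); candidates: (batch, num_objects, dim)
        query = torch.tanh(self.fuse(torch.cat([verbal, nonverbal], dim=-1)))
        keys = self.project(candidates)
        # Dot-product similarity between the fused instruction and each candidate object.
        return torch.einsum("bd,bnd->bn", query, keys)

verbal = torch.randn(2, 512)          # refined language features (stand-in)
nonverbal = torch.randn(2, 512)       # refined gesture/pose features (stand-in)
candidates = torch.randn(2, 10, 512)  # 10 candidate object regions per scene (stand-in)
scores = ReferringScorer()(verbal, nonverbal, candidates)
predicted_referent = scores.argmax(dim=-1)  # index of the highest-scoring object
```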