
RoboOmni: Proactive Robot Manipulation in Omni-modal Context

Siyin Wang, Jinlan Fu, Feihong Liu, Xinzhe He, Huangxuan Wu, Junhao Shi, Kexin Huang, Zhaoye Fei, Jingjing Gong, Zuxuan Wu, Yugang Jiang, See-Kiong Ng, Tat-Seng Chua, Xipeng Qiu

2025-10-29


Summary

This paper introduces a new way for robots to understand what people want them to do, moving beyond just following direct commands. It focuses on robots figuring out intentions from everyday interactions like conversations, sounds in the environment, and what they see, and then proactively helping.

What's the problem?

Currently, robots that can understand both vision and language usually need very specific instructions. In real life, people don't always tell robots exactly what to do; we hint at things or expect them to understand from the situation. Existing robots struggle with this kind of indirect communication and can't anticipate our needs.

What's the solution?

The researchers created a system called RoboOmni, which processes sight, sound (speech and other noises), and language together in a single model. It follows a Perceiver-Thinker-Talker-Executor design: the robot perceives its surroundings, infers what the person wants, confirms its understanding through spoken interaction, and then carries out the action. To train the system, the team also built a large dataset called OmniAction, with 140k episodes spanning thousands of speakers, environmental sounds, and backgrounds, covering six types of contextual instructions.
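To make the four-stage flow concrete, here is a minimal, purely illustrative Python sketch of the Perceiver-Thinker-Talker-Executor idea. All function names, the toy rule in `think`, and the string-based "observations" are invented stand-ins, not the paper's actual model:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    audio: str   # stand-in for speech + environmental sounds
    vision: str  # stand-in for the visual scene

def perceive(obs: Observation) -> dict:
    """Perceiver: fuse auditory and visual signals into one context."""
    return {"audio": obs.audio, "vision": obs.vision}

def think(context: dict) -> str:
    """Thinker: infer the latent intention (toy rule for illustration)."""
    if "kettle whistling" in context["audio"] and "kettle" in context["vision"]:
        return "turn off the kettle"
    return "await further context"

def talk(intention: str) -> str:
    """Talker: confirm the inferred intention with the user via speech."""
    return f"Should I {intention}?"

def execute(intention: str, confirmed: bool) -> str:
    """Executor: act only after the user confirms the intention."""
    return f"executing: {intention}" if confirmed else "standing by"

obs = Observation(audio="kettle whistling", vision="kettle on stove")
intention = think(perceive(obs))
print(talk(intention))
print(execute(intention, confirmed=True))
```

The point of the sketch is the ordering: intention is inferred from fused context rather than an explicit command, and the confirmation step sits between inference and action, which is what makes the assistance "proactive" rather than purely reactive.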

Why it matters?

This work is important because it brings robots closer to being truly helpful collaborators. By enabling robots to understand intentions proactively, instead of just reacting to commands, they can assist us more naturally and effectively in everyday tasks. This could lead to robots that are much more useful in homes, workplaces, and other real-world environments.

Abstract

Recent advances in Multimodal Large Language Models (MLLMs) have driven rapid progress in Vision-Language-Action (VLA) models for robotic manipulation. Although effective in many scenarios, current approaches largely rely on explicit instructions, whereas in real-world interactions, humans rarely issue instructions directly. Effective collaboration requires robots to infer user intentions proactively. In this work, we introduce cross-modal contextual instructions, a new setting where intent is derived from spoken dialogue, environmental sounds, and visual cues rather than explicit commands. To address this new setting, we present RoboOmni, a Perceiver-Thinker-Talker-Executor framework based on end-to-end omni-modal LLMs that unifies intention recognition, interaction confirmation, and action execution. RoboOmni fuses auditory and visual signals spatiotemporally for robust intention recognition, while supporting direct speech interaction. To address the absence of training data for proactive intention recognition in robotic manipulation, we build OmniAction, comprising 140k episodes, 5k+ speakers, 2.4k event sounds, 640 backgrounds, and six contextual instruction types. Experiments in simulation and real-world settings show that RoboOmni surpasses text- and ASR-based baselines in success rate, inference speed, intention recognition, and proactive assistance.