Learning Humanoid End-Effector Control for Open-Vocabulary Visual Loco-Manipulation
Runpei Dong, Ziyan Li, Xialin He, Saurabh Gupta
2026-02-19
Summary
This paper introduces a new system, called HERO, that allows humanoid robots to pick up and move objects in everyday environments like offices and coffee shops. It focuses on making robots better at precisely controlling their hands while also understanding what they're looking at.
What's the problem?
Teaching robots to reliably grab and move things is hard because it requires both accurate hand control and an understanding of the surrounding scene. Current methods rely on showing the robot lots of examples (imitation learning), but collecting enough real-world examples is difficult, and the resulting policies often fail when conditions change even slightly. In short, robots struggle to generalize what they've learned to new situations.
What's the solution?
The researchers combined the strengths of two approaches: powerful vision models, which are good at understanding images, and classical robotics techniques for precise control. The key was building a highly accurate system for tracking where the robot's hand needs to be. This system uses inverse kinematics to turn hand targets into reference arm motions, a learned neural model of how the arm actually moves, goal adjustment to compensate for predicted errors, and replanning when tracking drifts. Together, these pieces cut the hand (end-effector) tracking error by 3.2x.
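The residual-aware tracking loop described above can be sketched in a toy form. The 1-D "arm", the linear stand-in for the learned neural forward model, and all function names below are illustrative assumptions, not the paper's implementation; the sketch only shows how the four pieces (IK, learned forward model, goal adjustment, replanning) fit together to cancel a systematic tracking error.

```python
import numpy as np

# Toy 1-D stand-in for the arm: commanded targets are executed with a
# hidden systematic bias, mimicking sim-to-real tracking error.
TRUE_BIAS = 0.07  # unknown to the controller

def execute(command):
    """Plant: the arm actually reaches command + bias."""
    return command + TRUE_BIAS

def inverse_kinematics(ee_target):
    """a) IK: in this 1-D toy, the joint command equals the EE target."""
    return ee_target

def forward_model(command, w):
    """b) learned forward model: predicts the achieved EE pose.
    A linear model stands in for the paper's neural network."""
    return w[0] * command + w[1]

def fit_forward_model(commands, achieved):
    """Least-squares fit standing in for neural-network training."""
    A = np.stack([commands, np.ones_like(commands)], axis=1)
    w, *_ = np.linalg.lstsq(A, achieved, rcond=None)
    return w

def track(goal, w, n_steps=5, replan_tol=1e-3):
    """c) goal adjustment + d) replanning until predicted error is small."""
    command = inverse_kinematics(goal)
    for _ in range(n_steps):
        error = forward_model(command, w) - goal
        if abs(error) < replan_tol:
            break
        command -= error  # shift the commanded goal to cancel the error
    return execute(command)

# "Train" the forward model from logged command/outcome pairs.
cmds = np.linspace(-1.0, 1.0, 20)
outs = np.array([execute(c) for c in cmds])
w = fit_forward_model(cmds, outs)

goal = 0.5
naive = execute(inverse_kinematics(goal))  # no residual correction
corrected = track(goal, w)                 # residual-aware tracking
print(f"naive error: {abs(naive - goal):.4f}, "
      f"corrected error: {abs(corrected - goal):.4f}")
```

The point of the sketch is the division of labor: IK gives a nominal command, the learned model predicts where that command actually lands, and the correction loop keeps adjusting the goal until the predicted residual vanishes.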
Why does it matter?
This work is important because it offers a new way to train robots to interact with the world. By combining advanced vision with precise control, it makes robots more adaptable and capable of handling everyday tasks, opening the door for robots to be more helpful in our homes, workplaces, and public spaces.
Abstract
Visual loco-manipulation of arbitrary objects in the wild with humanoid robots requires accurate end-effector (EE) control and a generalizable understanding of the scene via visual inputs (e.g., RGB-D images). Existing approaches are based on real-world imitation learning and exhibit limited generalization due to the difficulty in collecting large-scale training datasets. This paper presents a new paradigm, HERO, for object loco-manipulation with humanoid robots that combines the strong generalization and open-vocabulary understanding of large vision models with strong control performance from simulated training. We achieve this by designing an accurate residual-aware EE tracking policy. This EE tracking policy combines classical robotics with machine learning. It uses a) inverse kinematics to convert residual end-effector targets into reference trajectories, b) a learned neural forward model for accurate forward kinematics, c) goal adjustment, and d) replanning. Together, these innovations help us cut down the end-effector tracking error by 3.2x. We use this accurate end-effector tracker to build a modular system for loco-manipulation, where we use open-vocabulary large vision models for strong visual generalization. Our system is able to operate in diverse real-world environments, from offices to coffee shops, where the robot is able to reliably manipulate various everyday objects (e.g., mugs, apples, toys) on surfaces ranging from 43cm to 92cm in height. Systematic modular and end-to-end tests in simulation and the real world demonstrate the effectiveness of our proposed design. We believe the advances in this paper can open up new ways of training humanoid robots to interact with daily objects.
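The modular pipeline in the abstract hands perception off to an open-vocabulary vision model and control off to the EE tracker. A minimal sketch of that hand-off is below; the detector stub, the camera intrinsics, and all names are illustrative assumptions (a real system would call an actual open-vocabulary detector on the RGB-D frame), while the pixel-plus-depth back-projection is the standard pinhole-camera computation.

```python
import numpy as np

# Assumed pinhole camera intrinsics (focal lengths and principal point).
FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0

def open_vocab_detect(rgb, prompt):
    """Stub for a large open-vocabulary vision model: given an image and a
    text query (e.g. "the red mug"), return the object's pixel (u, v).
    Hard-coded here purely for illustration."""
    return 400, 300

def back_project(u, v, depth_m):
    """Standard pinhole back-projection: pixel + depth -> 3-D point
    in the camera frame (meters)."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

rgb = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder RGB frame
depth = np.full((480, 640), 0.9)               # placeholder depth map (m)

u, v = open_vocab_detect(rgb, "the red mug")
target = back_project(u, v, depth[v, u])
print("EE target in camera frame (m):", target)  # fed to the EE tracker
```

Because perception and control are decoupled this way, swapping in a stronger open-vocabulary model improves visual generalization without retraining the tracking policy.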