DexTrack: Towards Generalizable Neural Tracking Control for Dexterous Manipulation from Human References
Xueyi Liu, Jianibieke Adalibieke, Qianwei Han, Yuzhe Qin, Li Yi
2025-02-14
Summary
This paper introduces DexTrack, a neural controller designed to help dexterous robotic hands mimic human movements to handle objects skillfully. By combining reinforcement learning and imitation learning with a growing pool of high-quality demonstrations, it makes the robot more adaptable and effective across different objects and situations.
What's the problem?
Robotic hands struggle with complex tasks like picking up or manipulating objects because these tasks require precise control of intricate contact dynamics and the ability to adapt. Current methods, like reinforcement learning and trajectory optimization, often fall short because they depend on task-specific rewards or precise system models, which are hard to obtain in real-world scenarios.
What's the solution?
The researchers created DexTrack, which trains a neural controller on large collections of successful tracking demonstrations, each pairing a human reference motion with the robot actions that reproduce it. A data flywheel alternates between training the controller and using it to collect more and better demonstrations, while a homotopy optimization method solves hard individual trajectories to increase demonstration diversity. By carefully combining reinforcement learning and imitation learning, DexTrack learns to handle new objects and situations more effectively. This approach was tested in both simulations and real-world settings, showing over a 10% improvement in success rates compared to existing methods.
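The data-flywheel idea can be pictured with a toy loop (all names and numbers here are illustrative stand-ins, not the authors' actual API): train the controller on the current demonstration pool, then roll it out on human references and keep the successful rollouts as new demonstrations, repeating so that data and controller improve together.

```python
class ToyController:
    """Toy stand-in for the neural tracking controller (illustrative only;
    the real controller is a neural network trained with RL + imitation)."""

    def __init__(self):
        self.skill = 0.3  # crude scalar proxy for tracking competence

    def train(self, demos):
        # Stand-in for imitation learning on the demo pool plus RL
        # fine-tuning: competence grows as training proceeds.
        self.skill = min(1.0, self.skill + 0.2)

    def track(self, ref):
        # Roll out the controller on a human reference; return the
        # action sequence and a tracking-success score.
        return [f"action_for_{ref}"], self.skill


def data_flywheel(human_refs, controller, n_rounds=3, threshold=0.4):
    """Alternate between (1) training the controller on the current
    demonstration pool and (2) using it to convert more human references
    into successful robot tracking demonstrations."""
    demos = []  # pairs of (human reference, robot action sequence)
    for _ in range(n_rounds):
        controller.train(demos)      # distill the pool gathered so far
        for ref in human_refs:       # expand the pool with new successes
            actions, score = controller.track(ref)
            if score >= threshold:
                demos.append((ref, actions))
    return demos


demos = data_flywheel(["pick_mug", "rotate_cube"], ToyController())
```

Each round both grows the demonstration pool and retrains the controller on it, which is the mutually reinforcing loop the paper describes.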
Why it matters?
This matters because it makes robots better at handling everyday tasks, like assisting in factories or helping people with disabilities. DexTrack's ability to adapt to new situations means robots could become more reliable and versatile in real-world applications, improving efficiency and accessibility.
Abstract
We address the challenge of developing a generalizable neural tracking controller for dexterous manipulation from human references. This controller aims to manage a dexterous robot hand to manipulate diverse objects for various purposes defined by kinematic human-object interactions. Developing such a controller is complicated by the intricate contact dynamics of dexterous manipulation and the need for adaptivity, generalizability, and robustness. Current reinforcement learning and trajectory optimization methods often fall short due to their dependence on task-specific rewards or precise system models. We introduce an approach that curates large-scale successful robot tracking demonstrations, comprising pairs of human references and robot actions, to train a neural controller. Utilizing a data flywheel, we iteratively enhance the controller's performance, as well as the number and quality of successful tracking demonstrations. We exploit available tracking demonstrations and carefully integrate reinforcement learning and imitation learning to boost the controller's performance in dynamic environments. At the same time, to obtain high-quality tracking demonstrations, we individually optimize per-trajectory tracking by leveraging the learned tracking controller in a homotopy optimization method. The homotopy optimization, mimicking chain-of-thought, aids in solving challenging trajectory tracking problems to increase demonstration diversity. We showcase our success by training a generalizable neural controller and evaluating it in both simulation and real world. Our method achieves over a 10% improvement in success rates compared to leading baselines. The project website with animated results is available at https://meowuu7.github.io/DexTrack/.
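The homotopy-optimization idea mentioned in the abstract can be illustrated on a one-dimensional toy problem (this is only an analogy, not the paper's per-trajectory tracking optimizer): rather than attacking a hard nonconvex objective directly, solve a sequence of blended objectives that morph from an easy problem into the hard one, warm-starting each stage from the previous solution.

```python
def f_grad(x):
    # Gradient of a "hard" nonconvex objective f(x) = x^4 - 3x^2 + x,
    # a toy stand-in for a difficult trajectory-tracking problem.
    return 4 * x**3 - 6 * x + 1

def g_grad(x):
    # Gradient of an "easy" convex surrogate g(x) = x^2.
    return 2 * x

def homotopy_minimize(x0, stages=10, iters=200, lr=0.01):
    """Trace a path of blended objectives F_t = (1-t)*g + t*f,
    warm-starting each stage's gradient descent from the previous
    stage's solution, so the easy problem guides the hard one."""
    x = x0
    for k in range(stages + 1):
        t = k / stages
        for _ in range(iters):
            x -= lr * ((1 - t) * g_grad(x) + t * f_grad(x))
    return x

def direct_minimize(x0, iters=2200, lr=0.01):
    """Plain gradient descent on the hard objective, for comparison."""
    x = x0
    for _ in range(iters):
        x -= lr * f_grad(x)
    return x

x_homotopy = homotopy_minimize(2.0)  # follows the path to the deeper minimum
x_direct = direct_minimize(2.0)      # settles in a shallower local minimum
```

The staged solve reaches the deeper basin that direct descent from the same start misses, mirroring how the paper's chain-of-thought-like homotopy stages make otherwise intractable tracking problems solvable.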