TactAlign: Human-to-Robot Policy Transfer via Tactile Alignment

Youngsun Wi, Jessica Yin, Elvis Xiang, Akash Sharma, Jitendra Malik, Mustafa Mukadam, Nima Fazeli, Tess Hellebrekers

2026-02-20

Summary

This paper introduces TactAlign, a method that lets robots learn skills by watching humans perform them, with a focus on tasks where touch matters. The goal is to get robots to understand what humans *feel* while interacting with objects, even if the robot's 'sense of touch' is different from a human's.

What's the problem?

Teaching robots complex tasks by demonstration is hard, especially when those tasks rely on a good sense of touch. Current methods often require the robot to have the *exact same* touch sensors as the human demonstrator, or they need a lot of paired data recording what the human feels and what the robot feels at the same moment. This is limiting because humans and robots are built differently and sense touch differently, and collecting all that paired data is expensive and time-consuming.

What's the solution?

TactAlign solves this by translating human touch signals into a format the robot can understand, even though their sensors are different. It needs no paired data and no manual labels. The method finds a common 'language' for touch by mapping both human and robot touch data into a shared latent representation. It does this with a technique called 'rectified flow', and it guides the translation using pseudo-pairs derived from how the human hand and the robot interact with objects, so it works even with very little demonstration data (see the sketch below).
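To make the rectified-flow idea concrete, here is a minimal, hypothetical PyTorch sketch of latent transport between a 'human' tactile latent space and a 'robot' tactile latent space. The encoders, the pseudo-pairing step, the network sizes, and all names are illustrative assumptions rather than the paper's actual implementation; the sketch only shows the core recipe: regress the straight-line velocity between pseudo-paired latents, then integrate it to carry human latents into the robot's latent space.

```python
# Hypothetical sketch of rectified-flow latent transport. The encoders that
# would produce z_human / z_robot, the pseudo-pairing, and all sizes are
# illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

LATENT_DIM = 64

class VelocityField(nn.Module):
    """Predicts the flow velocity v(x_t, t) that moves a human tactile latent
    toward its (pseudo-)paired robot tactile latent."""
    def __init__(self, dim=LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x_t, t):
        # t has shape (batch, 1); concatenate it as a conditioning scalar.
        return self.net(torch.cat([x_t, t], dim=-1))

def rectified_flow_loss(model, z_human, z_robot):
    """Rectified-flow objective: regress the constant straight-line velocity
    between pseudo-paired human and robot latents."""
    t = torch.rand(z_human.shape[0], 1)
    x_t = (1 - t) * z_human + t * z_robot   # point on the linear path
    target_v = z_robot - z_human            # velocity along that path
    return ((model(x_t, t) - target_v) ** 2).mean()

@torch.no_grad()
def transport(model, z_human, steps=20):
    """Euler-integrate the learned velocity field to map human tactile
    latents into the robot latent space."""
    x, dt = z_human.clone(), 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0], 1), i * dt)
        x = x + dt * model(x, t)
    return x

if __name__ == "__main__":
    # Stand-ins for encoded tactile observations; in practice these would come
    # from the glove and robot-sensor encoders, matched into pseudo-pairs via
    # hand-object interaction cues.
    z_human = torch.randn(512, LATENT_DIM)
    z_robot = z_human @ torch.randn(LATENT_DIM, LATENT_DIM) * 0.1 + 0.5

    model = VelocityField()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(200):
        opt.zero_grad()
        loss = rectified_flow_loss(model, z_human, z_robot)
        loss.backward()
        opt.step()

    z_transported = transport(model, z_human)
    print("alignment error:", (z_transported - z_robot).pow(2).mean().item())
```

Because the learned path is approximately straight, a handful of Euler steps is typically enough at inference time, which is what makes this kind of latent transport cheap.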

Why it matters?

This work matters because it makes it much easier to teach robots complex, touch-sensitive skills. We don't need expensive, identical sensors or huge datasets to get robots to learn from human demonstrations. The ability to transfer skills 'zero-shot', meaning the robot can perform a task from human demonstrations alone without any robot-collected training data for it, is a big step towards more versatile and helpful robots that can work alongside people in real-world situations.

Abstract

Human demonstrations collected by wearable devices (e.g., tactile gloves) provide fast and dexterous supervision for policy learning, and are guided by rich, natural tactile feedback. However, a key challenge is how to transfer human-collected tactile signals to robots despite the differences in sensing modalities and embodiment. Existing human-to-robot (H2R) approaches that incorporate touch often assume identical tactile sensors, require paired data, and involve little to no embodiment gap between the human demonstrator and the robot, limiting scalability and generality. We propose TactAlign, a cross-embodiment tactile alignment method that transfers human-collected tactile signals to a robot with a different embodiment. TactAlign transforms human and robot tactile observations into a shared latent representation using a rectified flow, without paired datasets, manual labels, or privileged information. Our method enables low-cost latent transport guided by hand-object interaction-derived pseudo-pairs. We demonstrate that TactAlign improves H2R policy transfer across multiple contact-rich tasks (pivoting, insertion, lid closing), generalizes to unseen objects and tasks from less than 5 minutes of human data, and enables zero-shot H2R transfer on a highly dexterous task (light bulb screwing).