DYMO-Hair: Generalizable Volumetric Dynamics Modeling for Robot Hair Manipulation
Chengyang Zhao, Uksang Yoo, Arkadeep Narayan Chaudhury, Giljoo Nam, Jonathan Francis, Jeffrey Ichnowski, Jean Oh
2025-10-08
Summary
This research introduces DYMO-Hair, a system that allows robots to style hair using a model that understands how hair moves and changes shape.
What's the problem?
Styling hair with robots is really hard! Hair is complex – it's fine, it moves in unpredictable ways, and it comes in many different styles. Existing robots struggle because they can't easily model the physics of hair, which makes the task unreliable and leaves people with limited mobility without robotic help for hair care.
What's the solution?
The researchers created a system that 'learns' how hair behaves. First, they built a new hair physics simulator and used it to generate large amounts of training data. With that data, they trained a model that predicts how hair will move when a robot acts on it. The model represents hairstyles in a compact 3D 'latent space', and it predicts motion by 'editing' this latent state according to the robot's action; because the latent space is pre-trained on many diverse hairstyles, the model generalizes to styles it has never seen. Finally, the robot pairs this dynamics model with a planner (MPPI) that searches for the actions most likely to move the hair toward a desired goal hairstyle.
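To make the plan-with-a-learned-model idea concrete, here is a minimal, illustrative sketch of an MPPI-style planning step. Everything in it is a simplifying assumption: the "latent state" is just a small vector, and `toy_latent_dynamics` is a made-up additive rule standing in for DYMO-Hair's learned action-conditioned latent state editing. Only the planning loop's structure (sample action sequences, roll them out, weight by cost, execute the best blended first action) reflects how MPPI works.

```python
import numpy as np

def toy_latent_dynamics(z, a):
    """Hypothetical stand-in for the learned action-conditioned
    latent state editing: the action simply shifts the latent state."""
    return z + a

def mppi_plan(z0, z_goal, horizon=5, num_samples=256,
              noise_std=0.5, temperature=1.0, seed=0):
    """Sample random action sequences, roll each out through the
    dynamics model, score it by its running distance to the goal
    state, and return the cost-weighted average of the first actions."""
    rng = np.random.default_rng(seed)
    dim = z0.shape[0]
    actions = rng.normal(0.0, noise_std, size=(num_samples, horizon, dim))
    costs = np.zeros(num_samples)
    for k in range(num_samples):
        z = z0.copy()
        for t in range(horizon):
            z = toy_latent_dynamics(z, actions[k, t])
            costs[k] += np.sum((z - z_goal) ** 2)  # running goal cost
    # Exponential (softmax) weighting of trajectories by negative cost
    weights = np.exp(-(costs - costs.min()) / temperature)
    weights /= weights.sum()
    # Blend the first action of every sampled sequence by its weight
    return np.einsum("k,kd->d", weights, actions[:, 0, :])

# One closed-loop step: plan, execute the first action, observe new state
z0, z_goal = np.zeros(3), np.ones(3)
best_first_action = mppi_plan(z0, z_goal)
z1 = toy_latent_dynamics(z0, best_first_action)
```

In a closed-loop system this plan-execute-observe step repeats from the new state, so planning errors are corrected by fresh observations rather than accumulating over the whole styling episode.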
Why it matters?
This work is a big step towards making robots more helpful in everyday life, especially for people who need assistance with personal care. It shows that robots can learn to handle delicate and complex materials like hair, opening the door for robots to perform other similar tasks, like grooming or even assisting in medical procedures. It also demonstrates a new approach to robot control that relies on understanding the physics of the world, rather than just memorizing specific actions.
Abstract
Hair care is an essential daily activity, yet it remains inaccessible to individuals with limited mobility and challenging for autonomous robot systems due to the fine-grained physical structure and complex dynamics of hair. In this work, we present DYMO-Hair, a model-based robot hair care system. We introduce a novel dynamics learning paradigm that is suited for volumetric quantities such as hair, relying on an action-conditioned latent state editing mechanism, coupled with a compact 3D latent space of diverse hairstyles to improve generalizability. This latent space is pre-trained at scale using a novel hair physics simulator, enabling generalization across previously unseen hairstyles. Using the dynamics model with a Model Predictive Path Integral (MPPI) planner, DYMO-Hair is able to perform visual goal-conditioned hair styling. Experiments in simulation demonstrate that DYMO-Hair's dynamics model outperforms baselines on capturing local deformation for diverse, unseen hairstyles. DYMO-Hair further outperforms baselines in closed-loop hair styling tasks on unseen hairstyles, with an average of 22% lower final geometric error and 42% higher success rate than the state-of-the-art system. Real-world experiments exhibit zero-shot transferability of our system to wigs, achieving consistent success on challenging unseen hairstyles where the state-of-the-art system fails. Together, these results introduce a foundation for model-based robot hair care, advancing toward more generalizable, flexible, and accessible robot hair styling in unconstrained physical environments. More details are available on our project page: https://chengyzhao.github.io/DYMOHair-web/.