Motion2Motion: Cross-topology Motion Transfer with Sparse Correspondence
Ling-Hao Chen, Yuhong Zhang, Zixin Yin, Zhiyang Dou, Xin Chen, Jingbo Wang, Taku Komura, Lei Zhang
2025-08-20
Summary
This paper presents a new method called Motion2Motion that allows animations to be transferred between characters with very different bone structures, like a human and a quadruped, without requiring large paired training datasets or a one-to-one mapping between bones.
What's the problem?
It's really hard to take an animation from one character and make another character do the same thing, especially if their skeletons are shaped differently. This is because there isn't a clear way to match up the bones one-to-one, and there aren't many datasets with animations for different kinds of characters that could be used to teach a computer how to do this.
What's the solution?
The Motion2Motion framework tackles this by using just one or a few example motions on the target character, plus a sparse set of matching bones identified between the source and target skeletons. Because the approach is training-free, it needs no large paired dataset and little setup before use; a minimal sketch of the inputs appears below.
Why it matters?
This is important because it makes it much easier for animators and game developers to reuse existing animations on new or different characters, saving a lot of time and effort. The authors demonstrate integration into downstream applications and user interfaces, pointing to real potential for adoption in creative industries.
Abstract
This work studies the challenge of transferring animations between characters whose skeletal topologies differ substantially. While retargeting techniques have advanced over the decades, transferring motion across diverse topologies remains under-explored. The primary obstacle lies in the inherent topological inconsistency between source and target skeletons, which prevents the establishment of straightforward one-to-one bone correspondences. Moreover, the current lack of large-scale paired motion datasets spanning different topological structures severely constrains the development of data-driven approaches. To address these limitations, we introduce Motion2Motion, a novel, training-free framework. Simple yet effective, Motion2Motion works with only one or a few example motions on the target skeleton, given a sparse set of bone correspondences between the source and target skeletons. Through comprehensive qualitative and quantitative evaluations, we demonstrate that Motion2Motion achieves efficient and reliable performance in both similar-skeleton and cross-species skeleton transfer scenarios. The practical utility of our approach is further evidenced by its successful integration into downstream applications and user interfaces, highlighting its potential for industrial use. Code and data are available at https://lhchen.top/Motion2Motion.