Temporal Residual Jacobians For Rig-free Motion Transfer
Sanjeev Muralikrishnan, Niladri Shekhar Dutt, Siddhartha Chaudhuri, Noam Aigerman, Vladimir Kim, Matthew Fisher, Niloy J. Mitra
2024-07-23

Summary
This paper introduces Temporal Residual Jacobians, a new method for transferring motion onto 3D character meshes without requiring rigging or intermediate shape keyframes. It aims to make animating 3D characters more efficient and accurate.
What's the problem?
Current methods for transferring motion from one character to another often rely on complex setups that require rigging (which is like creating a skeleton for the character) or specific shape keyframes (reference poses showing how the character should look at certain times). This can be limiting and labor-intensive, especially when animating characters with different shapes or designs, and these methods can also produce unnatural movements or artifacts in the animation.
What's the solution?
The authors propose a new approach called Temporal Residual Jacobians, which uses two neural networks working together to predict how a character should move over time. One network focuses on local geometric changes (how the shape of the character changes), while the other looks at temporal changes (how the motion evolves). These networks are trained together using 3D position data, allowing them to create smooth and realistic animations without needing any prior rigging or keyframes. During use, the model can extrapolate motion even when no keyframes are available, making it versatile for various character designs.
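The two-stage integration described above can be sketched in a toy form. This is not the authors' code: the learned spatial and temporal networks are replaced with a hypothetical stand-in (a small per-frame rotation residual), and the mesh is reduced to a 1D chain of vertices so that "spatial integration" becomes a simple cumulative sum of deformed edge vectors instead of solving a Poisson system.

```python
# Minimal sketch (assumptions: stand-in predictor, chain "mesh"), illustrating
# how per-frame residual Jacobians are integrated temporally (composed over
# frames) and spatially (applied to rest edges, then accumulated into positions).
import numpy as np

# Rest-pose "mesh": a chain of 5 vertices along the x-axis.
rest = np.stack([np.arange(5.0), np.zeros(5), np.zeros(5)], axis=1)
edges = rest[1:] - rest[:-1]          # (4, 3) rest-pose edge vectors

T = 8                                  # number of animation frames

def temporal_residual(t):
    """Hypothetical stand-in for the temporal network: predicts a small
    residual change (here, a fixed 0.1-rad rotation about z) per frame."""
    a = 0.1
    return np.array([[np.cos(a), -np.sin(a), 0.0],
                     [np.sin(a),  np.cos(a), 0.0],
                     [0.0,        0.0,       1.0]])

J = np.eye(3)                          # Jacobian at the rest pose
frames = []
for t in range(T):
    # Temporal integration: each frame's Jacobian builds on the previous
    # one by composing in the predicted residual.
    J = temporal_residual(t) @ J
    # Spatial integration: apply the Jacobian to rest edges, then
    # cumulatively sum edges to recover vertex positions (a 1D analogue
    # of solving for positions from per-face Jacobians on a mesh).
    deformed_edges = edges @ J.T
    verts = np.vstack([rest[0], rest[0] + np.cumsum(deformed_edges, axis=0)])
    frames.append(verts)

frames = np.stack(frames)              # (T, 5, 3) animated vertex positions
print(frames.shape)                    # → (8, 5, 3)
```

Because each frame only predicts a residual on top of the previous frame's Jacobian, the motion stays temporally smooth by construction, and here the rotation residuals preserve edge lengths exactly.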
Why it matters?
This research is important because it simplifies the process of animating 3D characters, making it accessible for creators who may not have the resources or expertise to set up traditional rigging systems. By allowing for more natural and varied animations across different character shapes, this method can significantly enhance fields like video game development, animation, and virtual reality.
Abstract
We introduce Temporal Residual Jacobians as a novel representation to enable data-driven motion transfer. Our approach does not assume access to any rigging or intermediate shape keyframes, produces geometrically and temporally consistent motions, and can be used to transfer long motion sequences. Central to our approach are two coupled neural networks that individually predict local geometric and temporal changes that are subsequently integrated, spatially and temporally, to produce the final animated meshes. The two networks are jointly trained, complement each other in producing spatial and temporal signals, and are supervised directly with 3D positional information. During inference, in the absence of keyframes, our method essentially solves a motion extrapolation problem. We test our setup on diverse meshes (synthetic and scanned shapes) to demonstrate its superiority in generating realistic and natural-looking animations on unseen body shapes against SoTA alternatives. Supplemental video and code are available at https://temporaljacobians.github.io/.