AnyMoLe: Any Character Motion In-betweening Leveraging Video Diffusion Models
Kwan Yun, Seokhyeon Hong, Chaelin Kim, Junyong Noh
2025-03-12
Summary
This paper introduces AnyMoLe, a method that uses video diffusion models to generate in-between frames for character animation, producing smooth motion transitions for arbitrary characters without requiring character-specific training data.
What's the problem?
Existing learning-based motion in-betweening methods need a motion dataset for each specific character, which makes them hard to apply to new or unusual characters for which no such data exists.
What's the solution?
The researchers built AnyMoLe, which generates in-between frames in two stages to improve contextual understanding, fine-tunes a video diffusion model with a technique called ICAdapt to bridge the gap between real-world videos and rendered characters, and applies a "motion-video mimicking" optimization to transfer the generated video onto characters with arbitrary joint structures.
Why it matters?
This lets animators create smooth, realistic transitions for virtually any character, including ones with unusual skeletons, without collecting character-specific motion data, greatly widening where motion in-betweening can be applied.
Abstract
Despite recent advancements in learning-based motion in-betweening, a key limitation has been overlooked: the requirement for character-specific datasets. In this work, we introduce AnyMoLe, a novel method that addresses this limitation by leveraging video diffusion models to generate motion in-between frames for arbitrary characters without external data. Our approach employs a two-stage frame generation process to enhance contextual understanding. Furthermore, to bridge the domain gap between real-world and rendered character animations, we introduce ICAdapt, a fine-tuning technique for video diffusion models. Additionally, we propose a "motion-video mimicking" optimization technique, enabling seamless motion generation for characters with arbitrary joint structures using 2D and 3D-aware features. AnyMoLe significantly reduces data dependency while generating smooth and realistic transitions, making it applicable to a wide range of motion in-betweening tasks.
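The abstract leaves the mechanics of "motion-video mimicking" at a high level. Below is a minimal sketch of what such an optimization loop could look like: per-frame joint rotations are fitted so that features of the rendered character match features of the diffusion-generated video. Everything here is an illustrative assumption rather than the authors' implementation; render_character, extract_features, the tensor shapes, and the optimizer settings are hypothetical placeholders standing in for a differentiable renderer and the 2D/3D-aware features the paper describes.

    # Sketch only: hypothetical stand-ins, not the paper's published code.
    import torch

    num_frames, num_joints = 16, 24
    # Per-frame joint rotations (axis-angle), the quantity being optimized.
    joint_rotations = torch.zeros(num_frames, num_joints, 3, requires_grad=True)

    def render_character(rotations):
        # Placeholder for a differentiable renderer of the 3D character.
        return rotations.tanh()

    def extract_features(frames):
        # Placeholder for a 2D/3D-aware feature extractor
        # (e.g., a frozen video encoder) applied to frames.
        return frames.flatten(1)

    # In the method, these features would come from the video diffusion
    # model's generated in-between frames; random values keep the sketch
    # self-contained and runnable.
    generated_video_features = torch.randn(num_frames, num_joints * 3)

    optimizer = torch.optim.Adam([joint_rotations], lr=1e-2)
    for step in range(200):
        optimizer.zero_grad()
        rendered = render_character(joint_rotations)
        # Match features of the rendered motion to the generated video.
        loss = torch.nn.functional.mse_loss(
            extract_features(rendered), generated_video_features
        )
        loss.backward()
        optimizer.step()

The key design idea this illustrates is that the 3D motion is never predicted directly; it is recovered by gradient descent on joint parameters against a video target, which is what allows the approach to work for arbitrary joint structures.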