
Motion Anything: Any to Motion Generation

Zeyu Zhang, Yiran Wang, Wei Mao, Danning Li, Rui Zhao, Biao Wu, Zirui Song, Bohan Zhuang, Ian Reid, Richard Hartley

2025-03-13


Summary

This paper introduces Motion Anything, an AI model that generates realistic human motion (such as dance or everyday actions) from text descriptions, music, or both, while focusing on the most important body parts and moments.

What's the problem?

Current AI models for motion generation either fail to prioritize the key frames and body parts that matter most, or struggle to combine text and music conditions effectively, making the results less accurate and less natural.

What's the solution?

Motion Anything uses attention-based masking to identify and prioritize the most important frames and body parts for the given input, and adaptively encodes text and music together to produce better-synchronized motion. It also introduces Text-Music-Dance (TMD), a new dataset of paired text, music, and dance, to train and evaluate the model.
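To make the masking idea concrete, here is a minimal sketch of condition-guided masking: frames that the condition (text or music embedding) attends to most strongly are the ones selected for masking during training. This is an illustrative simplification, not the paper's actual implementation; the function name, shapes, and mask ratio are all hypothetical.

```python
import numpy as np

def attention_guided_mask(motion_tokens, condition, mask_ratio=0.4):
    """Choose which motion frames to mask, guided by attention to the condition.

    motion_tokens: (T, D) array of per-frame motion embeddings.
    condition:     (D,) embedding of the text/music condition.
    Returns a boolean mask of shape (T,), True = frame is masked.
    All names and shapes here are illustrative, not from the paper's code.
    """
    # Scaled dot-product attention scores between the condition and each frame.
    scores = motion_tokens @ condition / np.sqrt(motion_tokens.shape[1])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()

    # Mask the frames the condition attends to most (the "dynamic" key frames),
    # forcing the model to reconstruct exactly the condition-relevant motion.
    k = max(1, int(mask_ratio * len(weights)))
    top_frames = np.argsort(weights)[-k:]
    mask = np.zeros(len(weights), dtype=bool)
    mask[top_frames] = True
    return mask

# Toy usage: 10 frames of 8-dim motion tokens, one 8-dim condition vector.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(10, 8))
cond = rng.normal(size=8)
mask = attention_guided_mask(tokens, cond, mask_ratio=0.3)
print(mask.sum())  # 3 frames selected for masking
```

Contrast this with random masking, which treats all frames equally: here the masking budget is spent on the frames most relevant to the condition, which is the intuition behind prioritizing dynamic frames and body parts.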

Why it matters?

This helps filmmakers, game developers, and VR creators generate lifelike character movements faster and more accurately, improving animation pipelines and interactive experiences.

Abstract

Conditional motion generation has been extensively studied in computer vision, yet two critical challenges remain. First, while masked autoregressive methods have recently outperformed diffusion-based approaches, existing masking models lack a mechanism to prioritize dynamic frames and body parts based on given conditions. Second, existing methods for different conditioning modalities often fail to integrate multiple modalities effectively, limiting control and coherence in generated motion. To address these challenges, we propose Motion Anything, a multimodal motion generation framework that introduces an Attention-based Mask Modeling approach, enabling fine-grained spatial and temporal control over key frames and actions. Our model adaptively encodes multimodal conditions, including text and music, improving controllability. Additionally, we introduce Text-Music-Dance (TMD), a new motion dataset consisting of 2,153 pairs of text, music, and dance, making it twice the size of AIST++, thereby filling a critical gap in the community. Extensive experiments demonstrate that Motion Anything surpasses state-of-the-art methods across multiple benchmarks, achieving a 15% improvement in FID on HumanML3D and showing consistent performance gains on AIST++ and TMD. See our project website https://steve-zeyu-zhang.github.io/MotionAnything