
Mogo: RQ Hierarchical Causal Transformer for High-Quality 3D Human Motion Generation

Dongjie Fu

2024-12-12

Summary

This paper introduces Mogo, a new system that generates realistic 3D human motions from text descriptions. It combines the strengths of BERT-type and GPT-type models to create high-quality animations for video games and other multimedia applications.

What's the problem?

Existing models for generating human motion from text fall into two camps: BERT-type masked models produce higher-quality motions, but they cannot stream their output the way GPT-type autoregressive models can. This limitation makes them less useful for real-time applications such as video games. They also perform worse at out-of-distribution generation, that is, producing motions unlike the ones they were trained on.

What's the solution?

Mogo addresses these issues with a single transformer model that generates high-quality 3D motions without extra refinement steps or models. It has two main parts: a hierarchical residual vector quantization autoencoder (RVQ-VAE) that accurately discretizes continuous motion sequences into layered tokens, and a hierarchical causal transformer that generates the base motion tokens autoregressively while inferring the residual layers, which allows for smooth, streamable output. This setup enables Mogo to produce longer motion sequences than previous models; a minimal sketch of the residual quantization step follows.
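
To make the residual quantization idea concrete, here is a minimal, hedged sketch of how an RVQ encoder turns a continuous motion sequence into several layers of discrete tokens, each layer quantizing what the previous layers missed. The tensor shapes, codebook sizes, and number of layers are illustrative assumptions, not Mogo's actual configuration.

```python
# Minimal sketch of residual vector quantization (RVQ), the idea behind
# Mogo's RVQ-VAE encoder. Shapes, codebook sizes, and the number of
# quantization layers are illustrative assumptions, not the paper's setup.
import torch

def rvq_encode(latents, codebooks):
    """Quantize continuous motion latents into one token stream per layer.

    latents:   (T, D) continuous features for T motion frames
    codebooks: list of (K, D) tensors, one codebook per residual layer
    Returns a list of (T,) index tensors and the quantized reconstruction.
    """
    residual = latents
    indices_per_layer = []
    quantized = torch.zeros_like(latents)
    for codebook in codebooks:
        # Pick the nearest code for the current residual at every frame.
        distances = torch.cdist(residual, codebook)   # (T, K)
        idx = distances.argmin(dim=-1)                 # (T,)
        selected = codebook[idx]                       # (T, D)
        indices_per_layer.append(idx)
        quantized = quantized + selected
        # The next layer only has to explain what this layer missed.
        residual = residual - selected
    return indices_per_layer, quantized

# Toy usage: 196 frames, 128-dim latents, 3 residual layers of 512 codes each.
latents = torch.randn(196, 128)
codebooks = [torch.randn(512, 128) for _ in range(3)]
tokens, recon = rvq_encode(latents, codebooks)
print(len(tokens), tokens[0].shape, recon.shape)
```

Each successive layer refines the reconstruction, so the base layer carries the coarse motion and the residual layers add detail; that hierarchy is what the causal transformer later generates.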

Why it matters?

Mogo is significant because it improves the generation of lifelike human movements, which is essential for creating realistic animations in video games and films. By combining the best features of existing models, Mogo not only enhances the quality of generated motions but also allows for more flexibility in how these motions can be used.

Abstract

In the field of text-to-motion generation, BERT-type masked models (MoMask, MMM) currently produce higher-quality outputs compared to GPT-type autoregressive models (T2M-GPT). However, these BERT-type models often lack the streaming output capability required for applications in video game and multimedia environments, a feature inherent to GPT-type models. Additionally, they demonstrate weaker performance in out-of-distribution generation. To surpass the quality of BERT-type models while leveraging a GPT-type structure, without adding extra refinement models that complicate data scaling, we propose a novel architecture, Mogo (Motion Only Generate Once), which generates high-quality lifelike 3D human motions by training a single transformer model. Mogo consists of only two main components: 1) RVQ-VAE, a hierarchical residual vector quantization variational autoencoder, which discretizes continuous motion sequences with high precision; 2) Hierarchical Causal Transformer, responsible for generating the base motion sequences in an autoregressive manner while simultaneously inferring residuals across different layers. Experimental results demonstrate that Mogo can generate continuous and cyclic motion sequences up to 260 frames (13 seconds) long, surpassing the 196-frame (10-second) length limit of existing datasets such as HumanML3D. On the HumanML3D test set, Mogo achieves an FID score of 0.079, outperforming the GPT-type models T2M-GPT (FID = 0.116) and AttT2M (FID = 0.112) as well as the BERT-type model MMM (FID = 0.080). Furthermore, our model achieves the best quantitative performance in out-of-distribution generation.
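
The abstract's second component, the hierarchical causal transformer, generates the base token stream autoregressively and infers the residual-layer tokens at each step. The sketch below illustrates that generation pattern only; DummyMogo and its methods (next_base_logits, residual_logits, end_token) are hypothetical placeholders, not the paper's API.

```python
# Hedged sketch of the generation pattern described in the abstract: one
# transformer produces the base token stream autoregressively and, at each
# step, also infers the residual-layer tokens. DummyMogo is a stand-in so
# the loop runs; it is not Mogo's actual architecture or interface.
import torch

class DummyMogo:
    """Placeholder model: returns random logits over a 512-token vocabulary."""
    end_token = 511

    def next_base_logits(self, text_emb, base_tokens):
        return torch.randn(512)

    def residual_logits(self, text_emb, base_tokens, layer):
        return torch.randn(512)

@torch.no_grad()
def generate_motion_tokens(model, text_embedding, max_frames=260, num_layers=3):
    base_tokens = []                                    # layer-0 tokens, causal
    residual_tokens = [[] for _ in range(num_layers - 1)]
    for _ in range(max_frames):
        # Predict the next base token from the text prompt and past tokens.
        logits = model.next_base_logits(text_embedding, base_tokens)
        next_base = int(torch.argmax(logits))
        if next_base == model.end_token:                # assumed end-of-motion symbol
            break
        base_tokens.append(next_base)
        # Infer residual tokens for this step, one per remaining RVQ layer.
        for layer in range(1, num_layers):
            res_logits = model.residual_logits(text_embedding, base_tokens, layer)
            residual_tokens[layer - 1].append(int(torch.argmax(res_logits)))
    return base_tokens, residual_tokens

base, residuals = generate_motion_tokens(DummyMogo(), torch.randn(512))
print(len(base), [len(r) for r in residuals])
```

In the full system, the per-layer token streams would then be passed back through the RVQ-VAE decoder to recover continuous motion frames, which is what allows the output to be streamed as it is generated.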