Autonomous Character-Scene Interaction Synthesis from Text Instruction
Nan Jiang, Zimo He, Zi Wang, Hongjie Li, Yixin Chen, Siyuan Huang, Yixin Zhu
2024-10-08

Summary
This paper introduces a method for synthesizing realistic human motion in 3D environments from a single text instruction and a goal location, making it easier to animate characters without hand-crafted waypoints or other complex inputs.
What's the problem?
Animating characters in 3D environments is challenging, especially for multi-stage activities such as locomotion, hand-reaching, and human-object interaction. Current methods typically require detailed user input to define waypoints and stage transitions, which makes it hard to automate character animation from simple instructions.
What's the solution?
The authors introduce a framework built around an auto-regressive diffusion model that generates the next motion segment from a single text instruction and a goal location, paired with an autonomous scheduler that decides when to transition between action stages. To train the model, they also collected a new dataset of 16 hours of motion-capture data across 120 indoor scenes, covering 40 types of motions with language annotations. A scene representation that encodes local perception at both the starting point and the goal location keeps the generated motion grounded in the environment, leading to smoother and more realistic movements.
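To make the pipeline concrete, below is a minimal Python sketch of how such a multi-stage, auto-regressive generation loop could be structured. It is not the authors' implementation: sample_segment, stage_finished, and all shapes are hypothetical placeholders standing in for the diffusion sampler and the learned transition scheduler.

```python
# Hypothetical sketch of a multi-stage, auto-regressive generation loop.
# All names, shapes, and heuristics are illustrative, not the authors' code.
import numpy as np

SEG_LEN, POSE_DIM = 16, 69          # frames per segment, pose parameters per frame

def sample_segment(history, text_emb, scene_feat, goal):
    """Stand-in for one diffusion sampling pass that produces the next motion
    segment conditioned on past frames, language, scene features, and the goal."""
    last = history[-1] if len(history) else np.zeros(POSE_DIM)
    seg = np.tile(last, (SEG_LEN, 1))
    # Placeholder dynamics: drift the root position (first 3 dims) toward the goal.
    seg[:, :3] += np.linspace(0, 1, SEG_LEN)[:, None] * (goal - last[:3]) * 0.2
    return seg

def stage_finished(segment, goal, tol=0.05):
    """Stand-in for the autonomous scheduler predicting stage transitions;
    here it simply checks proximity of the root joint to the goal."""
    return np.linalg.norm(segment[-1, :3] - goal) < tol

def synthesize(stages, text_emb, scene_feat, max_segments=64):
    """Chain action stages (e.g. walk -> reach), each generated auto-regressively."""
    motion = []
    for goal in stages:                       # one goal location per action stage
        for _ in range(max_segments):
            seg = sample_segment(motion, text_emb, scene_feat, goal)
            motion.extend(seg)
            if stage_finished(seg, goal):     # scheduler decides when to switch
                break
    return np.asarray(motion)

trajectory = synthesize(stages=[np.array([2.0, 0.0, 0.5])],
                        text_emb=None, scene_feat=None)
print(trajectory.shape)                       # (num_frames, POSE_DIM)
```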
Why it matters?
This work is significant because it simplifies the animation process, allowing users to create complex character movements just by describing them in words. This could have major applications in video games, movies, and virtual reality, making character interactions more engaging and realistic.
Abstract
Synthesizing human motions in 3D environments, particularly those with complex activities such as locomotion, hand-reaching, and human-object interaction, presents substantial demands for user-defined waypoints and stage transitions. These requirements pose challenges for current models, leading to a notable gap in automating the animation of characters from simple human inputs. This paper addresses this challenge by introducing a comprehensive framework for synthesizing multi-stage scene-aware interaction motions directly from a single text instruction and goal location. Our approach employs an auto-regressive diffusion model to synthesize the next motion segment, along with an autonomous scheduler predicting the transition for each action stage. To ensure that the synthesized motions are seamlessly integrated within the environment, we propose a scene representation that considers the local perception both at the start and the goal location. We further enhance the coherence of the generated motion by integrating frame embeddings with language input. Additionally, to support model training, we present a comprehensive motion-captured dataset comprising 16 hours of motion sequences in 120 indoor scenes covering 40 types of motions, each annotated with precise language descriptions. Experimental results demonstrate the efficacy of our method in generating high-quality, multi-stage motions closely aligned with environmental and textual conditions.
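The dual local scene representation mentioned in the abstract (perception around both the start and the goal location) can be pictured with the following hedged sketch: occupancy patches are cropped around the two locations and concatenated into one conditioning vector. The voxel grid, crop size, and function names are assumptions for illustration, not the paper's actual encoding.

```python
# Rough sketch (not the authors' code) of a dual local scene condition:
# occupancy is cropped around both the start position and the goal location,
# then concatenated into a single conditioning feature.
import numpy as np

def local_crop(occupancy, center, half=8):
    """Extract a (2*half)^3 occupancy patch centered on a voxel coordinate,
    zero-padding at scene boundaries."""
    patch = np.zeros((2 * half,) * 3, dtype=occupancy.dtype)
    lo = np.maximum(center - half, 0)
    hi = np.minimum(center + half, occupancy.shape)
    dst_lo = lo - (center - half)
    dst_hi = dst_lo + (hi - lo)
    patch[dst_lo[0]:dst_hi[0], dst_lo[1]:dst_hi[1], dst_lo[2]:dst_hi[2]] = \
        occupancy[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
    return patch

def scene_condition(occupancy, start_voxel, goal_voxel):
    """Flatten and concatenate the two local crops into one conditioning vector."""
    return np.concatenate([local_crop(occupancy, start_voxel).ravel(),
                           local_crop(occupancy, goal_voxel).ravel()])

scene = (np.random.rand(64, 64, 32) > 0.9).astype(np.float32)  # toy voxel grid
cond = scene_condition(scene, np.array([10, 10, 5]), np.array([50, 40, 5]))
print(cond.shape)  # (2 * 16**3,) = (8192,)
```

In the paper, this kind of conditioning is combined with frame embeddings and the language input before being fed to the auto-regressive diffusion model; the sketch only illustrates why perceiving both endpoints helps the motion fit the scene.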