Dense Motion Captioning
Shiyao Xu, Benedetta Liberatori, Gül Varol, Paolo Rota
2025-11-10
Summary
This paper introduces a new challenge in understanding 3D human movement: figuring out *what* actions are happening and *when* they happen within a longer sequence, and then describing those actions with detailed captions.
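Concretely, the output of dense motion captioning can be pictured as a list of timestamped, captioned segments. The sketch below is illustrative only; the class and field names are assumptions, not the paper's actual data format:

```python
from dataclasses import dataclass

@dataclass
class CaptionedSegment:
    start: float   # seconds from the beginning of the motion sequence
    end: float
    caption: str

# A hypothetical dense caption for one sequence: each action is
# localized in time and described in natural language.
dense_caption = [
    CaptionedSegment(0.0, 2.4, "the person walks forward"),
    CaptionedSegment(2.4, 4.1, "the person squats down"),
    CaptionedSegment(4.1, 6.0, "the person jumps in place"),
]

def overlaps(a: CaptionedSegment, b: CaptionedSegment) -> bool:
    """Check whether two segments overlap in time."""
    return a.start < b.end and b.start < a.end

# Consecutive actions in this example tile the sequence without overlap.
assert not overlaps(dense_caption[0], dense_caption[1])
```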
What's the problem?
Currently, research focuses on creating motion *from* text, but understanding motion itself is lagging behind. Existing datasets aren't good enough for this task because they lack detailed, moment-by-moment labels of what's going on in the movement, and they usually only show very short, simple actions. It's hard to teach a computer to understand complex movements if the data doesn't show them.
What's the solution?
The researchers created a new, large-scale dataset called CompMo with 60,000 motion sequences. Each sequence is complex, chaining together between two and ten actions, and every action is precisely labeled with its start and end time. They also built a model, DEMO, that pairs a large language model with a lightweight motion adapter that maps the motion data into the language model's input space, allowing it to generate captions that match both the content and the timing of the actions.
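The adapter idea can be sketched as a small projection network that turns per-frame motion features into token embeddings the language model can consume alongside text. This is a minimal sketch under assumed dimensions (263-dimensional motion features, a 4096-dimensional LLM embedding space), not the paper's actual DEMO architecture:

```python
import torch
import torch.nn as nn

class MotionAdapter(nn.Module):
    """Illustrative motion adapter: projects per-frame motion features
    into the language model's embedding space so motion "tokens" can be
    interleaved with text tokens. Dimensions and layer choices here are
    assumptions for the sketch, not taken from the paper."""

    def __init__(self, motion_dim: int = 263, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(motion_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, motion: torch.Tensor) -> torch.Tensor:
        # motion: (batch, frames, motion_dim) -> (batch, frames, llm_dim)
        return self.proj(motion)

adapter = MotionAdapter()
tokens = adapter(torch.randn(2, 120, 263))  # 2 clips, 120 frames each
assert tokens.shape == (2, 120, 4096)
```

Keeping the adapter simple means most of the captioning ability comes from the pretrained language model, with the adapter only bridging the modality gap.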
Why does it matter?
This work is important because it supplies the tools needed to advance research in 3D human motion understanding: a rich dataset and a strong baseline model. Being able to understand what someone is doing from their movements has applications in areas like robotics, animation, and even healthcare, allowing computers to better interpret and interact with human behavior.
Abstract
Recent advances in 3D human motion and language integration have primarily focused on text-to-motion generation, leaving the task of motion understanding relatively unexplored. We introduce Dense Motion Captioning, a novel task that aims to temporally localize and caption actions within 3D human motion sequences. Current datasets fall short in providing detailed temporal annotations and predominantly consist of short sequences featuring few actions. To overcome these limitations, we present the Complex Motion Dataset (CompMo), the first large-scale dataset featuring richly annotated, complex motion sequences with precise temporal boundaries. Built through a carefully designed data generation pipeline, CompMo includes 60,000 motion sequences, each composed of multiple actions ranging from at least two to ten, accurately annotated with their temporal extents. We further present DEMO, a model that integrates a large language model with a simple motion adapter, trained to generate dense, temporally grounded captions. Our experiments show that DEMO substantially outperforms existing methods on CompMo as well as on adapted benchmarks, establishing a robust baseline for future research in 3D motion understanding and captioning.