Moto: Latent Motion Token as the Bridging Language for Robot Manipulation
Yi Chen, Yuying Ge, Yizhuo Li, Yixiao Ge, Mingyu Ding, Ying Shan, Xihui Liu
2024-12-09

Summary
This paper introduces Moto, a system that helps robots learn manipulation tasks by representing the motion observed in videos as latent motion tokens.
What's the problem?
Robots struggle to learn how to manipulate objects because traditional methods require extensive action-labeled datasets, which are expensive and time-consuming to collect. In addition, robots need a representation of motion that lets knowledge learned from videos transfer to real robot actions.
What's the solution?
The authors propose Moto, which uses a Latent Motion Tokenizer to convert video content into sequences of discrete motion tokens. These tokens capture the movements seen in the videos and are learned without any action labels. Moto-GPT is then pre-trained to autoregressively predict these tokens, acquiring general motion knowledge from video. After pre-training, a co-fine-tuning strategy bridges latent motion token prediction and real robot control, so the learned motion priors can be translated into actual robot actions.
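To make the pipeline concrete, below is a minimal, illustrative sketch in PyTorch of the three pieces described above: a VQ-style motion tokenizer, an autoregressive motion-token model, and a co-fine-tuning loss that combines motion-token prediction with action prediction. All module names, dimensions, the codebook size, and the 7-DoF action head are assumptions for illustration, not the paper's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LatentMotionTokenizer(nn.Module):
    """Toy VQ-style tokenizer: encodes the change between two frames
    into a few discrete motion tokens (all sizes are illustrative)."""
    def __init__(self, frame_dim=512, codebook_size=128, num_tokens=8, token_dim=64):
        super().__init__()
        self.num_tokens = num_tokens
        self.encoder = nn.Sequential(
            nn.Linear(frame_dim * 2, 256), nn.ReLU(),
            nn.Linear(256, num_tokens * token_dim),
        )
        self.codebook = nn.Embedding(codebook_size, token_dim)

    def forward(self, frame_t, frame_tp1):
        # Encode the frame pair into `num_tokens` continuous motion embeddings.
        z = self.encoder(torch.cat([frame_t, frame_tp1], dim=-1))
        z = z.view(frame_t.size(0), self.num_tokens, -1)
        # Quantize each embedding to its nearest codebook entry (a motion token id).
        dists = torch.cdist(z, self.codebook.weight.unsqueeze(0).expand(z.size(0), -1, -1))
        return dists.argmin(dim=-1)                            # (batch, num_tokens)

class MotoGPT(nn.Module):
    """Toy decoder-only transformer that autoregressively predicts motion tokens
    and, during fine-tuning, also outputs robot actions."""
    def __init__(self, vocab_size=128, d_model=256, n_layers=4, n_heads=4, max_len=256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.motion_head = nn.Linear(d_model, vocab_size)  # next motion token
        self.action_head = nn.Linear(d_model, 7)           # e.g. a 7-DoF action (assumption)

    def forward(self, token_ids):
        T = token_ids.size(1)
        pos = torch.arange(T, device=token_ids.device)
        x = self.tok_emb(token_ids) + self.pos_emb(pos)
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(token_ids.device)
        h = self.blocks(x, mask=mask)
        return self.motion_head(h), self.action_head(h)

tokenizer, model = LatentMotionTokenizer(), MotoGPT()

# Pre-training objective: next-motion-token prediction on unlabeled video (dummy features here).
frames = torch.randn(4, 9, 512)                     # (batch, frames, frame features)
tokens = torch.cat([tokenizer(frames[:, i], frames[:, i + 1])
                    for i in range(frames.size(1) - 1)], dim=1)
motion_logits, _ = model(tokens[:, :-1])
pretrain_loss = F.cross_entropy(motion_logits.reshape(-1, 128), tokens[:, 1:].reshape(-1))

# Co-fine-tuning: keep the motion-token objective and add an action loss
# on a small action-labeled dataset (dummy actions here).
actions = torch.randn(4, tokens.size(1) - 1, 7)
motion_logits, action_pred = model(tokens[:, :-1])
finetune_loss = (F.cross_entropy(motion_logits.reshape(-1, 128), tokens[:, 1:].reshape(-1))
                 + F.mse_loss(action_pred, actions))
```

The key design idea the sketch tries to convey is that the same autoregressive backbone serves both stages: pre-training only ever sees motion tokens, while co-fine-tuning keeps that objective and adds an action head, so the motion priors are not forgotten when the model learns robot control.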
Why it matters?
This research is important because it provides a way for robots to learn from existing video data without extensive action labeling. By using motion tokens as a 'language' for describing actions, Moto makes robot training more data-efficient and effective, which helps robots perform complex manipulation tasks in real-world settings.
Abstract
Recent developments in Large Language Models pre-trained on extensive corpora have shown significant success in various natural language processing tasks with minimal fine-tuning. This success offers new promise for robotics, which has long been constrained by the high cost of action-labeled data. We ask: given the abundant video data containing interaction-related knowledge available as a rich "corpus", can a similar generative pre-training approach be effectively applied to enhance robot learning? The key challenge is to identify an effective representation for autoregressive pre-training that benefits robot manipulation tasks. Inspired by the way humans learn new skills through observing dynamic environments, we propose that effective robotic learning should emphasize motion-related knowledge, which is closely tied to low-level actions and is hardware-agnostic, facilitating the transfer of learned motions to actual robot actions. To this end, we introduce Moto, which converts video content into latent Motion Token sequences by a Latent Motion Tokenizer, learning a bridging "language" of motion from videos in an unsupervised manner. We pre-train Moto-GPT through motion token autoregression, enabling it to capture diverse visual motion knowledge. After pre-training, Moto-GPT demonstrates the promising ability to produce semantically interpretable motion tokens, predict plausible motion trajectories, and assess trajectory rationality through output likelihood. To transfer learned motion priors to real robot actions, we implement a co-fine-tuning strategy that seamlessly bridges latent motion token prediction and real robot control. Extensive experiments show that the fine-tuned Moto-GPT exhibits superior robustness and efficiency on robot manipulation benchmarks, underscoring its effectiveness in transferring knowledge from video data to downstream visual manipulation tasks.
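The abstract notes that Moto-GPT can assess trajectory rationality through output likelihood. As a rough illustration of how an autoregressive model is typically used for such scoring, here is a minimal sketch that computes the average per-token log-likelihood of a motion-token sequence; the function name, shapes, and the dummy comparison are illustrative assumptions rather than the paper's evaluation procedure.

```python
import torch
import torch.nn.functional as F

def trajectory_log_likelihood(logits, token_ids):
    """Average per-token log-likelihood of a motion-token sequence under an
    autoregressive model; higher scores suggest more plausible trajectories.

    logits:    (T, vocab) next-token logits produced by the model
    token_ids: (T,) the motion tokens actually observed in the trajectory
    """
    log_probs = F.log_softmax(logits, dim=-1)
    token_log_probs = log_probs.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)
    return token_log_probs.mean()

# Dummy comparison with random "model" outputs: a trajectory made of the model's
# own preferred tokens scores higher than an arbitrary one.
vocab = 128
logits = torch.randn(16, vocab)
good_traj = logits.argmax(dim=-1)
bad_traj = torch.randint(0, vocab, (16,))
print(trajectory_log_likelihood(logits, good_traj).item())
print(trajectory_log_likelihood(logits, bad_traj).item())
```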