FAST: Efficient Action Tokenization for Vision-Language-Action Models
Karl Pertsch, Kyle Stachowicz, Brian Ichter, Danny Driess, Suraj Nair, Quan Vuong, Oier Mees, Chelsea Finn, Sergey Levine
2025-01-17
Summary
This paper introduces a new way to tokenize robot actions so that AI models can learn complex tasks more efficiently. The researchers created a system called FAST (Frequency-space Action Sequence Tokenization) that helps AI understand and generate robot movements better and faster than previous methods.
What's the problem?
Current methods for teaching robots complex tasks using AI struggle when dealing with quick, precise movements. It's like trying to teach a robot to play piano by breaking down each finger movement separately, which becomes too complicated and slow when you're dealing with fast, intricate pieces.
What's the solution?
The researchers developed FAST, which is like teaching the robot to understand music in terms of overall patterns and rhythms instead of individual notes. It uses a mathematical technique called the discrete cosine transform to compress robot movements into a more manageable form. They also created FAST+, a universal version trained on one million real robot trajectories that works across many different robots and tasks. When combined with another AI model called pi0, FAST can be trained on 10,000 hours of robot data while matching the performance of slower diffusion-based methods.
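The core idea above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation (the real FAST pipeline also normalizes actions and compresses the quantized coefficients with byte-pair encoding, which is omitted here): apply a discrete cosine transform along the time axis of an action chunk, then quantize the coefficients to integers that can serve as discrete tokens. The `scale` parameter is a hypothetical quantization setting chosen for this sketch.

```python
# Sketch of frequency-space action compression (assumption: NOT the official
# FAST code; BPE compression and normalization steps are omitted).
import numpy as np
from scipy.fft import dct, idct

def encode_chunk(actions, scale=100.0):
    """actions: (T, D) array of continuous actions for one chunk.
    Returns quantized DCT coefficients (integer 'tokens')."""
    coeffs = dct(actions, axis=0, norm="ortho")    # per-dimension DCT over time
    return np.round(coeffs * scale).astype(int)    # coarse quantization

def decode_chunk(tokens, scale=100.0):
    """Invert quantization and the DCT to recover an action chunk."""
    return idct(tokens / scale, axis=0, norm="ortho")

# Smooth fake action trajectory: 50 timesteps, 7 joint dimensions.
chunk = np.cumsum(np.random.randn(50, 7) * 0.01, axis=0)
tokens = encode_chunk(chunk)
recon = decode_chunk(tokens)
```

Because smooth trajectories concentrate their energy in a few low-frequency DCT coefficients, most quantized coefficients are zero and compress well, which is what makes this representation efficient for high-frequency control data.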
Why it matters?
This matters because it could make robots much better at learning complex tasks quickly. It's like giving robots a shortcut to understanding intricate movements, which could help them perform delicate surgeries, create detailed artwork, or handle fragile objects in factories. By making the learning process up to 5 times faster, it could also speed up research and development in robotics, potentially leading to more advanced and capable robots in various fields like healthcare, manufacturing, and exploration.
Abstract
Autoregressive sequence models, such as Transformer-based vision-language-action (VLA) policies, can be tremendously effective for capturing complex and generalizable robotic behaviors. However, such models require us to choose a tokenization of our continuous action signals, which determines how the discrete symbols predicted by the model map to continuous robot actions. We find that current approaches for robot action tokenization, based on simple per-dimension, per-timestep binning schemes, typically perform poorly when learning dexterous skills from high-frequency robot data. To address this challenge, we propose a new compression-based tokenization scheme for robot actions, based on the discrete cosine transform. Our tokenization approach, Frequency-space Action Sequence Tokenization (FAST), enables us to train autoregressive VLAs for highly dexterous and high-frequency tasks where standard discretization methods fail completely. Based on FAST, we release FAST+, a universal robot action tokenizer, trained on 1M real robot action trajectories. It can be used as a black-box tokenizer for a wide range of robot action sequences, with diverse action spaces and control frequencies. Finally, we show that, when combined with the pi0 VLA, our method can scale to training on 10k hours of robot data and match the performance of diffusion VLAs, while reducing training time by up to 5x.