VLA-0: Building State-of-the-Art VLAs with Zero Modification
Ankit Goyal, Hugo Hadfield, Xuning Yang, Valts Blukis, Fabio Ramos
2025-10-17
Summary
This paper explores a new, surprisingly effective way to build robots that can understand and follow instructions given in everyday language, combining vision (what the robot 'sees') with language and actions.
What's the problem?
Currently, building robots that can perform tasks based on language commands is difficult. Existing methods often add complexity by changing how language models work or by bolting on extra components specifically for predicting actions. The simplest approach of all, treating actions themselves *as* text, has been largely overlooked.
What's the solution?
The researchers created a model called VLA-0, which simply represents actions as text alongside the visual input and language instructions. Rather than changing the underlying model, they carefully tuned the design details (how actions are written out as text and how the model is trained and queried) needed to make this simple recipe work well. Surprisingly, VLA-0 performed better than more complex models on several benchmarks, even ones trained with much more robotic data. They validated it both in simulated environments and on a real robot.
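To make the "actions as text" idea concrete, here is a minimal sketch of how a continuous robot action could be serialized into a plain-text string that a language model emits, and parsed back into numbers. The bin count, value range, and space-separated format below are illustrative assumptions for this summary, not the paper's exact scheme.

```python
# Sketch: discretize a continuous action (e.g., end-effector deltas plus
# gripper command) into integer bins and write them as ordinary text,
# so a vision-language model can output actions with no vocabulary
# changes or special action heads. All constants here are hypothetical.

N_BINS = 1000           # assumed discretization resolution
LOW, HIGH = -1.0, 1.0   # assume actions are normalized to [-1, 1]

def action_to_text(action):
    """Map each action dimension to an integer bin; join bins as text."""
    tokens = []
    for a in action:
        a = min(max(a, LOW), HIGH)                       # clamp to range
        b = round((a - LOW) / (HIGH - LOW) * (N_BINS - 1))
        tokens.append(str(b))
    return " ".join(tokens)

def text_to_action(text):
    """Parse the model's text output back into a continuous action."""
    return [int(t) / (N_BINS - 1) * (HIGH - LOW) + LOW
            for t in text.split()]

action = [0.25, -0.5, 0.0, 1.0]
encoded = action_to_text(action)    # e.g. "624 250 500 999"
decoded = text_to_action(encoded)   # close to the original action
```

The round trip loses at most half a bin width per dimension, which is the usual trade-off of any discretized action representation.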
Why does it matter?
This research shows that you don't necessarily need complicated designs to build capable robots. Using a simple text-based approach for actions can be incredibly powerful, potentially making it easier and cheaper to develop robots that can understand and respond to human instructions in the real world. It also challenges the assumption that more complex models are always better.
Abstract
Vision-Language-Action models (VLAs) hold immense promise for enabling generalist robot manipulation. However, the best way to build them remains an open question. Current approaches often add complexity, such as modifying the existing vocabulary of a Vision-Language Model (VLM) with action tokens or introducing special action heads. Curiously, the simplest strategy of representing actions directly as text has remained largely unexplored. This work introduces VLA-0 to investigate this idea. We find that VLA-0 is not only effective; it is surprisingly powerful. With the right design, VLA-0 outperforms more involved models. On LIBERO, a popular benchmark for evaluating VLAs, VLA-0 outperforms all existing methods trained on the same robotic data, including pi_0.5-KI, OpenVLA-OFT and SmolVLA. Furthermore, without large-scale robotics-specific training, it outperforms methods trained on large-scale robotic data, like pi_0.5-KI, pi_0, GR00T-N1 and MolmoAct. These findings also translate to the real world, where VLA-0 outperforms SmolVLA, a VLA model pre-trained on large-scale real data. This paper summarizes our unexpected findings and spells out the specific techniques required to unlock the high performance of this simple yet potent VLA design. Visual results, code, and trained models are provided here: https://vla0.github.io/.