VLA^2: Empowering Vision-Language-Action Models with an Agentic Framework for Unseen Concept Manipulation
Han Zhao, Jiaxuan Zhang, Wenxuan Song, Pengxiang Ding, Donglin Wang
2025-10-17
Summary
This paper introduces a new system, VLA^2, designed to improve how robots understand and interact with objects, especially ones they haven't seen before during training.
What's the problem?
Current robots that combine vision and language are good at following instructions involving objects they have already learned about. However, they struggle significantly when asked to manipulate objects whose descriptions or appearances were not in their original training data. Imagine a robot trained on red blocks failing to pick up a blue block just because of the color difference; that is the kind of problem this paper addresses. In short, these models have trouble generalizing to 'out-of-distribution' objects.
What's the solution?
The researchers created VLA^2, which builds upon an existing robotic system called OpenVLA. The key idea is to give VLA^2 the ability to look up information about unfamiliar objects online using web searches and to use object detection to better understand what it's seeing. So, if the robot encounters a new object, it can quickly find descriptions and images to help it figure out how to interact with it. They also created a challenging testing environment with new objects and descriptions to specifically evaluate how well VLA^2 handles these unfamiliar situations.
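The pipeline described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function names, the stubbed retrieval dictionary, the placeholder bounding box, and the example object are all hypothetical. In the real system, retrieval would query the web, detection would use a vision model, and the grounded instruction would be passed to OpenVLA for execution.

```python
# Hypothetical sketch of a VLA^2-style agentic step (names are illustrative).
# Web retrieval and object detection are stubbed; the real system would call
# a search API and a detector, then hand the result to the OpenVLA policy.

def web_retrieve(concept: str) -> str:
    """Stand-in for web retrieval: return a textual description of a concept."""
    knowledge = {"moai statue": "grey carved stone head figure"}  # stubbed lookup
    return knowledge.get(concept, concept)

def detect(image, description: str):
    """Stand-in for an open-vocabulary detector: return a bounding box."""
    return (40, 30, 80, 90)  # placeholder (x1, y1, x2, y2)

def agentic_step(image, instruction: str, unseen_concept: str) -> dict:
    """Enrich an out-of-distribution instruction with retrieved knowledge and
    a detected target region before passing it to the VLA executor."""
    description = web_retrieve(unseen_concept)
    box = detect(image, description)
    grounded = instruction.replace(unseen_concept, description)
    return {"instruction": grounded, "target_box": box}

result = agentic_step(image=None,
                      instruction="pick up the moai statue",
                      unseen_concept="moai statue")
print(result["instruction"])  # the rewritten, grounded instruction
```

The key design idea the sketch captures is that the VLA policy itself is unchanged; the agentic wrapper only rewrites its inputs with externally retrieved visual and textual knowledge.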
Why it matters?
This work is important because it makes robots much more adaptable and useful in real-world scenarios. Instead of needing to be specifically trained on every single object they might encounter, VLA^2 can leverage external knowledge to handle new situations effectively. The reported gains, a 44.2% improvement in success rate on the hardest benchmark and a 20.2% average improvement across the customized environments, show that this approach is a big step towards robots that can understand and interact with the world around them even when things aren't exactly as they were trained.
Abstract
Current vision-language-action (VLA) models, pre-trained on large-scale robotic data, exhibit strong multi-task capabilities and generalize well to variations in visual and language instructions for manipulation. However, their success rate drops significantly when faced with object concepts outside the training data, such as unseen object descriptions and textures. To address this, we propose a novel agentic framework, VLA^2, which uses OpenVLA as the execution backbone and integrates external modules such as web retrieval and object detection to provide visual and textual knowledge about target objects to the VLA. This approach mitigates generalization failure when handling out-of-distribution objects. Based on the LIBERO simulation environment, we introduce novel objects and object descriptions to construct a new evaluation benchmark with three difficulty levels to test the effectiveness of our method. Our framework outperforms current state-of-the-art models on our designed hard-level generalization benchmark. Compared to the standalone OpenVLA baseline, VLA^2 achieves a 44.2% improvement in success rate on the hard-level benchmark and an average improvement of 20.2% across all customized environments, without any performance degradation on in-domain tasks. Project website: https://vla-2.github.io.