
VLA-Adapter: An Effective Paradigm for Tiny-Scale Vision-Language-Action Model

Yihao Wang, Pengxiang Ding, Lingxiao Li, Can Cui, Zirui Ge, Xinyang Tong, Wenxuan Song, Han Zhao, Wei Zhao, Pengxu Hou, Siteng Huang, Yifan Tang, Wenhui Wang, Ru Zhang, Jianyi Liu, Donglin Wang

2025-09-12


Summary

This paper introduces a new way to build Vision-Language-Action (VLA) models, AI systems that interpret images and language instructions and then carry out actions, such as a robot following a command. The core idea is to make these models smaller, more efficient, and easier to train.

What's the problem?

Building good VLA models currently requires a huge amount of computing power and data. The typical recipe takes an already very large Vision-Language Model (VLM) and further pre-trains it on massive robotic datasets, which is expensive and time-consuming. The open challenge is to connect what the model 'sees' and 'understands' (vision and language) to what it 'does' (action) without relying on such a large backbone or this costly pre-training.

What's the solution?

The researchers developed VLA-Adapter. Rather than leaning on a huge backbone and robot-data pre-training, they first analyze which vision-language 'conditions' (intermediate representations from the backbone) actually matter for predicting actions. They then attach a small, lightweight 'Policy module' with a 'Bridge Attention' mechanism that injects only those crucial conditions into the action-generation process (see the sketch below). This lets a model with only a 0.5B-parameter backbone perform well without any pre-training on robot data.
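To make the idea concrete, here is a minimal, hedged sketch of what a Bridge-Attention-style policy could look like: learnable action queries cross-attend to intermediate vision-language features, and a learned gate controls how much of each condition is injected. This is not the authors' code; all class names, layer counts, and dimensions are illustrative assumptions.

```python
# Illustrative sketch only (not the paper's implementation).
import torch
import torch.nn as nn


class BridgeAttentionBlock(nn.Module):
    """Action tokens cross-attend to one layer of VL features, with a learned injection gate."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # starts closed; learns how much VL condition to inject
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, action_queries: torch.Tensor, vl_condition: torch.Tensor) -> torch.Tensor:
        # action_queries: (B, num_action_tokens, dim) learnable action tokens
        # vl_condition:   (B, num_vl_tokens, dim) intermediate VL features from the backbone
        q = self.norm_q(action_queries)
        kv = self.norm_kv(vl_condition)
        bridged, _ = self.cross_attn(q, kv, kv)               # pull in vision-language information
        x = action_queries + torch.tanh(self.gate) * bridged  # gated injection of the condition
        x = x + self.self_attn(x, x, x)[0]                    # mix information across action tokens
        return x + self.ffn(x)


class TinyPolicy(nn.Module):
    """Small stack of bridge blocks mapping action queries + VL conditions to an action chunk."""

    def __init__(self, dim: int = 512, depth: int = 4, action_dim: int = 7, horizon: int = 8):
        super().__init__()
        self.action_queries = nn.Parameter(torch.randn(1, horizon, dim) * 0.02)
        self.blocks = nn.ModuleList([BridgeAttentionBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, action_dim)

    def forward(self, vl_conditions: list[torch.Tensor]) -> torch.Tensor:
        # vl_conditions: one (B, T_vl, dim) tensor per selected backbone layer
        x = self.action_queries.expand(vl_conditions[0].shape[0], -1, -1)
        for block, cond in zip(self.blocks, vl_conditions):
            x = block(x, cond)       # each block is conditioned on a different backbone layer
        return self.head(x)          # (B, horizon, action_dim) continuous actions
```

The key design point this sketch tries to capture is that the backbone is not asked to output actions directly: a small, trainable module pulls in whichever intermediate representations are most useful and turns them into an action sequence.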

Why it matters?

This work is important because it significantly lowers the barrier to building and deploying VLA models. A strong VLA model can be trained in just 8 hours on a single consumer-grade GPU, at a fraction of the usual computational cost. That means more researchers and developers can work on robotic AI, and it paves the way for faster progress toward robots that understand and respond to our instructions in the real world.

Abstract

Vision-Language-Action (VLA) models typically bridge the gap between perceptual and action spaces by pre-training a large-scale Vision-Language Model (VLM) on robotic data. While this approach greatly enhances performance, it also incurs significant training costs. In this paper, we investigate how to effectively bridge vision-language (VL) representations to action (A). We introduce VLA-Adapter, a novel paradigm designed to reduce the reliance of VLA models on large-scale VLMs and extensive pre-training. To this end, we first systematically analyze the effectiveness of various VL conditions and present key findings on which conditions are essential for bridging perception and action spaces. Based on these insights, we propose a lightweight Policy module with Bridge Attention, which autonomously injects the optimal condition into the action space. In this way, our method achieves high performance using only a 0.5B-parameter backbone, without any robotic data pre-training. Extensive experiments on both simulated and real-world robotic benchmarks demonstrate that VLA-Adapter not only achieves state-of-the-art level performance, but also offers the fastest inference speed reported to date. Furthermore, thanks to the proposed advanced bridging paradigm, VLA-Adapter enables the training of a powerful VLA model in just 8 hours on a single consumer-grade GPU, greatly lowering the barrier to deploying the VLA model. Project page: https://vla-adapter.github.io/.
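As a hedged usage sketch of the overall wiring the abstract describes, the snippet below pulls intermediate hidden states from a small (~0.5B) Hugging Face language model standing in for the VLM backbone, projects them, and feeds them as per-layer conditions to the TinyPolicy class from the earlier sketch. The backbone choice, layer selection, and dimensions are assumptions for illustration only; image tokens are omitted and only text conditioning is shown.

```python
# Hedged wiring sketch; reuses TinyPolicy from the previous code block.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

backbone_name = "Qwen/Qwen2.5-0.5B"   # placeholder ~0.5B backbone, not necessarily the paper's choice
tokenizer = AutoTokenizer.from_pretrained(backbone_name)
backbone = AutoModelForCausalLM.from_pretrained(backbone_name)

instruction = "pick up the red block and place it in the bowl"
inputs = tokenizer(instruction, return_tensors="pt")

with torch.no_grad():
    out = backbone(**inputs, output_hidden_states=True)

hidden = out.hidden_states            # tuple of (num_layers + 1) tensors, each (B, T, hidden_dim)
selected = hidden[-4:]                # pick a few layers as conditions; which layers matter is what the paper analyzes

proj = nn.Linear(selected[0].shape[-1], 512)   # map backbone width to the policy width used above
vl_conditions = [proj(h) for h in selected]

policy = TinyPolicy(dim=512, depth=4, action_dim=7, horizon=8)
actions = policy(vl_conditions)       # (1, 8, 7) predicted action chunk
print(actions.shape)
```

Because only the small policy (and projection) must be trained from scratch, this kind of setup is what makes the reported low training cost plausible; the exact conditioning scheme and training recipe are, of course, detailed in the paper itself.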