Vision-Language-Action Models for Autonomous Driving: Past, Present, and Future
Tianshuai Hu, Xiaolu Liu, Song Wang, Yiyao Zhu, Ao Liang, Lingdong Kong, Guoyang Zhao, Zeying Gong, Jun Cen, Zhiyu Huang, Xiaoshuai Hao, Linfeng Li, Hang Song, Xiangtai Li, Jun Ma, Shaojie Shen, Jianke Zhu, Dacheng Tao, Ziwei Liu, Junwei Liang
2025-12-18
Summary
This paper surveys an emerging way to build self-driving systems, moving beyond traditional pipelines that often struggle with complicated situations. It examines how combining visual perception, language understanding, and action planning can produce more reliable and human-like autonomous driving.
What's the problem?
Traditional self-driving systems use a step-by-step pipeline: first 'seeing' the world, then 'deciding' what to do, and finally 'acting'. This works in simple cases but breaks down when scenes become complex or unexpected, because errors in the 'seeing' step propagate downstream and corrupt the whole process. Some systems instead map what they 'see' directly to actions, but these are hard to interpret, adapt poorly to new situations, and cannot easily follow instructions.
What's the solution?
The paper reviews a newer approach called Vision-Language-Action (VLA), which integrates image understanding, language processing, and action generation in one framework. It organizes VLA systems into two main types: end-to-end models that fold everything into a single network, and dual-system designs that separate slow, careful reasoning (using language models) from fast, safety-focused execution (using planners). It further subdivides methods by how they generate actions (as text or as numbers) and how guidance is delivered (explicitly or implicitly), and finally reviews the datasets used to evaluate these systems.
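The dual-system split described above can be illustrated with a toy sketch: a slow component deliberates over the scene in language at low frequency, while a fast component turns the latest guidance into numerical actions every tick. All class and function names here (`SlowVLM`, `FastPlanner`, `drive`) are hypothetical stand-ins for illustration, not APIs from the surveyed systems.

```python
from dataclasses import dataclass

@dataclass
class Guidance:
    """Language-level instruction produced by the slow system."""
    maneuver: str  # e.g. "yield" or "cruise"

class SlowVLM:
    """Stand-in for a vision-language model: deliberates over a scene
    description and emits guidance (runs infrequently)."""
    def deliberate(self, scene: str) -> Guidance:
        if "pedestrian" in scene:
            return Guidance("yield")
        return Guidance("cruise")

class FastPlanner:
    """Stand-in for the safety-critical planner: maps the latest guidance
    plus the current state to a numerical action (runs every tick)."""
    def act(self, guidance: Guidance, speed: float) -> float:
        # Numerical action generator: next target speed in m/s.
        if guidance.maneuver == "yield":
            return max(0.0, speed - 2.0)   # decelerate toward a stop
        return min(15.0, speed + 1.0)      # accelerate toward cruise speed

def drive(scene_stream, vlm, planner, speed=10.0, slow_period=5):
    """Fast loop fires every tick; guidance refreshes every `slow_period` ticks."""
    guidance = Guidance("cruise")
    speeds = []
    for t, scene in enumerate(scene_stream):
        if t % slow_period == 0:               # slow system: low frequency
            guidance = vlm.deliberate(scene)
        speed = planner.act(guidance, speed)   # fast system: every tick
        speeds.append(speed)
    return speeds
```

The point of the split is that the expensive language-model call sits outside the control loop, so safety-critical actions are never blocked on slow deliberation; the fast planner always acts on the most recent guidance available.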
Why does it matter?
This research is important because it aims to make self-driving cars safer, more reliable, and easier for humans to understand. By using language, these systems can potentially follow instructions better and handle unexpected situations more gracefully, ultimately leading to autonomous vehicles that are more trustworthy and aligned with human expectations.
Abstract
Autonomous driving has long relied on modular "Perception-Decision-Action" pipelines, where hand-crafted interfaces and rule-based components often break down in complex or long-tailed scenarios. Their cascaded design further propagates perception errors, degrading downstream planning and control. Vision-Action (VA) models address some limitations by learning direct mappings from visual inputs to actions, but they remain opaque, sensitive to distribution shifts, and lack structured reasoning or instruction-following capabilities. Recent progress in Large Language Models (LLMs) and multimodal learning has motivated the emergence of Vision-Language-Action (VLA) frameworks, which integrate perception with language-grounded decision making. By unifying visual understanding, linguistic reasoning, and actionable outputs, VLAs offer a pathway toward more interpretable, generalizable, and human-aligned driving policies. This work provides a structured characterization of the emerging VLA landscape for autonomous driving. We trace the evolution from early VA approaches to modern VLA frameworks and organize existing methods into two principal paradigms: End-to-End VLA, which integrates perception, reasoning, and planning within a single model, and Dual-System VLA, which separates slow deliberation (via VLMs) from fast, safety-critical execution (via planners). Within these paradigms, we further distinguish subclasses such as textual vs. numerical action generators and explicit vs. implicit guidance mechanisms. We also summarize representative datasets and benchmarks for evaluating VLA-based driving systems and highlight key challenges and open directions, including robustness, interpretability, and instruction fidelity. Overall, this work aims to establish a coherent foundation for advancing human-compatible autonomous driving systems.