Look Before Acting: Enhancing Vision Foundation Representations for Vision-Language-Action Models

Yulin Luo, Hao Chen, Zhuangzhe Wu, Bowen Sui, Jiaming Liu, Chenyang Gu, Zhuoyang Liu, Qiuxuan Feng, Jiale Yu, Shuo Gu, Peng Jia, Pheng-Ann Heng, Shanghang Zhang

2026-03-19

Summary

This paper focuses on improving how robots understand both what you tell them to do (language) and what they see (vision) in order to act, specifically on manipulation tasks such as moving objects.

What's the problem?

Current vision-language-action models don't consistently use visual information throughout the process of figuring out *how* to do something. As these models process an instruction through their deeper layers, they rely less and less on what they 'see', which makes precise and complex actions hard to perform. Essentially, they lose sight of the important visual details.

What's the solution?

The researchers created a new model called DeepVision-VLA. It works by allowing the 'vision' part of the robot's brain to constantly communicate with the 'language-action' part, even in the deeper layers. This keeps the visual information relevant throughout the process. They also developed a method called Action-Guided Visual Pruning, which helps the robot focus on the *important* parts of what it sees, ignoring distractions, without requiring a lot of extra computing power.
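The pruning idea can be illustrated with a toy sketch: score each visual token by how much shallow-layer attention it receives, then keep only the top-scoring ones. This is a minimal, hypothetical illustration of attention-guided token pruning — the function name, inputs, and keep-ratio are assumptions for demonstration, not the paper's actual AGVP implementation.

```python
def prune_visual_tokens(visual_tokens, attn_scores, keep_ratio=0.5):
    """Keep the visual tokens that receive the most shallow-layer attention.

    visual_tokens: list of visual token embeddings (any objects)
    attn_scores:   per-token attention mass, e.g. summed over action queries
    keep_ratio:    fraction of tokens to retain (illustrative default)
    """
    assert len(visual_tokens) == len(attn_scores)
    k = max(1, int(len(visual_tokens) * keep_ratio))
    # Rank token indices by attention score, highest first.
    ranked = sorted(range(len(attn_scores)),
                    key=lambda i: attn_scores[i], reverse=True)
    keep = sorted(ranked[:k])  # preserve original spatial order
    return [visual_tokens[i] for i in keep], keep

# Toy example: four image patches, two of which the model attends to most.
tokens = ["patch0", "patch1", "patch2", "patch3"]
scores = [0.05, 0.40, 0.10, 0.45]
kept, idx = prune_visual_tokens(tokens, scores, keep_ratio=0.5)
print(kept)  # ['patch1', 'patch3']
```

Because the scores come from attention the model computes anyway in its shallow layers, a selection step like this adds very little extra computation — which matches the paper's claim of minimal overhead.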

Why it matters?

This research is important because it significantly improves a robot's ability to perform tasks in both simulated and real-world environments, outperforming previous methods. It also gives us a better understanding of *how* to build robots that can truly understand and react to their surroundings, paving the way for more capable and reliable robotic assistants.

Abstract

Vision-Language-Action (VLA) models have recently emerged as a promising paradigm for robotic manipulation, in which reliable action prediction critically depends on accurately interpreting and integrating visual observations conditioned on language instructions. Although recent works have sought to enhance the visual capabilities of VLA models, most approaches treat the LLM backbone as a black box, providing limited insight into how visual information is grounded into action generation. Therefore, we perform a systematic analysis of multiple VLA models across different action-generation paradigms and observe that sensitivity to visual tokens progressively decreases in deeper layers during action generation. Motivated by this observation, we propose DeepVision-VLA, built on a Vision-Language Mixture-of-Transformers (VL-MoT) framework. This framework enables shared attention between the vision foundation model and the VLA backbone, injecting multi-level visual features from the vision expert into deeper layers of the VLA backbone to enhance visual representations for precise and complex manipulation. In addition, we introduce Action-Guided Visual Pruning (AGVP), which leverages shallow-layer attention to prune irrelevant visual tokens while preserving task-relevant ones, reinforcing critical visual cues for manipulation with minimal computational overhead. DeepVision-VLA outperforms prior state-of-the-art methods by 9.0% and 7.5% on simulated and real-world tasks, respectively, providing new insights for the design of visually enhanced VLA models.