
EmbodiedMidtrain: Bridging the Gap between Vision-Language Models and Vision-Language-Action Models via Mid-training

Yiyang Du, Zhanqiu Guo, Xin Ye, Liu Ren, Chenyan Xiong

2026-04-27


Summary

This paper focuses on improving how well robots understand and follow instructions involving both vision (what they see) and language (what they're told to do). It addresses the issue that existing 'brain' models for robots often aren't well-suited for real-world tasks because they're built using general-purpose vision-language models that weren't specifically designed for robots.

What's the problem?

Currently, robots use vision-language models that were originally trained on huge general-purpose datasets of images and text, but those datasets look very different from the visual scenes and instructions a robot actually encounters while performing tasks. Using such a general model directly therefore doesn't give the best performance: the robot's 'understanding' isn't well aligned with the tasks it needs to do.

What's the solution?

The researchers developed a method called 'EmbodiedMidtrain'. They first showed that robot-related data occupies a small, distinct region within the much larger pool of general image-text data. They then built a data engine that uses a lightweight, learnable 'proximity estimator' to pick out the images and text from that larger pool that look most like robot data, and uses this curated mixture to continue training ('mid-train') the original vision-language model *before* it is fine-tuned into a robot-action model. This mid-training step steers the model toward the visual and language cues that matter for robots, making it better at understanding instructions and performing actions (a rough sketch of the selection step is given below).
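To make the idea concrete, here is a minimal sketch of the data-selection step, assuming the 'proximity estimator' is a small binary classifier trained to separate embeddings of robot (VLA) samples from general (VLM) samples. The embedding shapes, the logistic-regression estimator, and the top-k budget are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch of EmbodiedMidtrain-style data selection (not the authors' code).
# Assumes precomputed embeddings for a small set of robot (VLA) samples and a large
# pool of general image-text (VLM) samples; a logistic regression stands in for the
# paper's lightweight learnable proximity estimator.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy embeddings: in practice these would come from the VLM's vision/text encoder.
vla_emb = rng.normal(loc=1.0, size=(500, 64))        # embodied (robot) samples
vlm_pool_emb = rng.normal(loc=0.0, size=(10_000, 64))  # general image-text pool

# Train the proximity estimator: positives = VLA samples,
# negatives = a random subset of the general VLM pool.
neg_idx = rng.choice(len(vlm_pool_emb), size=500, replace=False)
X = np.vstack([vla_emb, vlm_pool_emb[neg_idx]])
y = np.concatenate([np.ones(len(vla_emb)), np.zeros(len(neg_idx))])
estimator = LogisticRegression(max_iter=1000).fit(X, y)

# Score every pool sample by its estimated proximity to the VLA distribution,
# then keep the most VLA-aligned candidates for the mid-training mixture.
proximity = estimator.predict_proba(vlm_pool_emb)[:, 1]
k = 2000  # mid-training budget (assumed)
selected_idx = np.argsort(proximity)[-k:]
print(f"selected {len(selected_idx)} samples, "
      f"mean proximity {proximity[selected_idx].mean():.3f}")
```

The selected subset would then be used to continue training (mid-train) the vision-language model before the usual robot-action fine-tuning; both of those later stages are standard supervised training and are omitted from the sketch.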

Why does it matter?

This work is important because it allows robots to perform tasks more effectively without needing to build entirely new, massive models or use huge amounts of computing power. By carefully refining existing models, the authors achieve performance comparable to more complex and expensive approaches. This makes advanced robot capabilities more accessible and practical for real-world applications, such as helping with household chores or working in factories.

Abstract

Vision-Language-Action Models (VLAs) inherit their visual and linguistic capabilities from Vision-Language Models (VLMs), yet most VLAs are built from off-the-shelf VLMs that are not adapted to the embodied domain, limiting their downstream performance. In this work, we propose EmbodiedMidtrain to bridge the gap between VLMs and VLAs. We first characterize the data distribution gap between them, showing that VLA data occupy compact regions that are largely separated from the broader VLM distribution, while the degree of alignment varies substantially both across and within VLM data sources. Then, we build a mid-training data engine that leverages a lightweight learnable proximity estimator to select the most VLA-aligned candidates from a large VLM pool, and mid-trains the VLM on this curated mixture before downstream VLA fine-tuning. Experiments on three robot manipulation benchmarks show that mid-training consistently improves performance across different VLM backbones, achieving results competitive with expert VLAs and off-the-shelf VLMs trained with larger model scale and training budgets. Further analysis reveals that mid-training provides a stronger initialization for VLA fine-tuning, with gains emerging from the earliest steps and widening throughout training. Moreover, the data engine captures both dataset-level and sample-level alignment signals, favoring spatial reasoning over text-centric tasks while preserving the diversity of the VLM data. We will release all code, data and models for future research.
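As one way to picture the 'dataset-level and sample-level alignment signals' mentioned in the abstract, the sketch below aggregates per-sample proximity scores by their source dataset and spreads the selection budget across sources so that diversity is preserved. The proportional-budget rule, the toy source names, and the variable names (`scores`, `sources`) are assumptions for illustration, not the paper's exact procedure.

```python
# Hypothetical two-level use of proximity scores (illustrative only):
# per-dataset averages decide how much of the selection budget each source gets
# (dataset-level signal), while within each source the highest-scoring samples
# are kept (sample-level signal), preserving source diversity.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
scores = rng.random(n)                                              # per-sample proximity scores
sources = rng.choice(["caption", "vqa", "spatial", "ocr"], size=n)  # source dataset of each sample

budget = 2000
dataset_mean = {d: scores[sources == d].mean() for d in np.unique(sources)}
total = sum(dataset_mean.values())

selection = []
for d, mean_score in dataset_mean.items():
    # Dataset-level signal: more VLA-aligned sources receive a larger share,
    # but every source keeps at least one sample to preserve diversity.
    share = max(1, int(budget * mean_score / total))
    idx = np.flatnonzero(sources == d)
    # Sample-level signal: within each source, keep its highest-scoring samples.
    top = idx[np.argsort(scores[idx])[-share:]]
    selection.extend(top.tolist())

print({d: int((sources[selection] == d).sum()) for d in dataset_mean})
```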