EmbodiedMidtrain: Bridging the Gap between Vision-Language Models and Vision-Language-Action Models via Mid-training

arXiv cs.CL / 4/23/2026


Key Points

  • Vision-Language-Action Models (VLAs) often start from off-the-shelf Vision-Language Models (VLMs) that are not adapted to embodied settings, creating a distribution gap that limits downstream performance.
  • The paper shows that VLA data form compact regions largely separated from the broader VLM distribution, with alignment strength varying substantially both across and within VLM data sources.
  • It introduces “EmbodiedMidtrain,” a mid-training data engine that uses a lightweight learnable proximity estimator to select VLA-aligned candidates from a large VLM pool and then mid-trains the VLM on a curated mixture before VLA fine-tuning.
  • Experiments on three robot manipulation benchmarks demonstrate consistent gains across multiple VLM backbones, reaching results competitive with expert VLAs and with off-the-shelf VLMs trained at larger model scales and training budgets.
  • The authors find that mid-training yields a stronger initialization for VLA fine-tuning, with benefits appearing from the earliest training steps and widening over time. The data engine captures alignment at both the dataset and sample levels while preserving the diversity of the VLM data.

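The paper does not specify the form of the "lightweight learnable proximity estimator," but the selection idea can be illustrated with a minimal sketch: train a small binary classifier to separate VLA-style embeddings from generic VLM embeddings, then rank the VLM pool by the classifier's VLA-probability and keep the top candidates. All function names, the logistic-regression form, and the toy data below are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def train_proximity_estimator(x_vla, x_vlm, lr=0.1, steps=500):
    """Fit a logistic-regression proxy (w, b) by gradient descent on
    binary cross-entropy: VLA samples are labeled 1, VLM samples 0."""
    x = np.vstack([x_vla, x_vlm])
    y = np.concatenate([np.ones(len(x_vla)), np.zeros(len(x_vlm))])
    w, b = np.zeros(x.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid probability
        grad = p - y                            # dBCE/dlogit
        w -= lr * (x.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def select_aligned(pool, w, b, k):
    """Return indices of the k pool samples scored closest to the
    VLA distribution; these form the curated mid-training mixture."""
    scores = 1.0 / (1.0 + np.exp(-(pool @ w + b)))
    return np.argsort(-scores)[:k]

# Toy demo: VLA embeddings cluster near +1, the VLM pool spreads around 0,
# mimicking the compact-vs-broad distribution gap described in the paper.
rng = np.random.default_rng(0)
x_vla = rng.normal(1.0, 0.3, size=(200, 8))
x_vlm = rng.normal(0.0, 1.0, size=(2000, 8))
w, b = train_proximity_estimator(x_vla, x_vlm)
chosen = select_aligned(x_vlm, w, b, k=100)
# Selected pool samples should sit closer to the VLA cluster than average.
print(x_vlm[chosen].mean() > x_vlm.mean())
```

In this sketch the estimator scores individual samples, which naturally captures the sample-level alignment signal the paper highlights; dataset-level alignment would fall out of aggregating these scores per source.
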
Abstract

Vision-Language-Action Models (VLAs) inherit their visual and linguistic capabilities from Vision-Language Models (VLMs), yet most VLAs are built from off-the-shelf VLMs that are not adapted to the embodied domain, limiting their downstream performance. In this work, we propose EmbodiedMidtrain to bridge the gap between VLMs and VLAs. We first characterize the data distribution gap between them, showing that VLA data occupy compact regions that are largely separated from the broader VLM distribution, while the degree of alignment varies substantially both across and within VLM data sources. Then, we build a mid-training data engine that leverages a lightweight learnable proximity estimator to select the most VLA-aligned candidates from a large VLM pool, and mid-trains the VLM on this curated mixture before downstream VLA fine-tuning. Experiments on three robot manipulation benchmarks show that mid-training consistently improves performance across different VLM backbones, achieving results competitive with expert VLAs and off-the-shelf VLMs trained with larger model scale and training budgets. Further analysis reveals that mid-training provides a stronger initialization for VLA fine-tuning, with gains emerging from the earliest steps and widening throughout training. Moreover, the data engine captures both dataset-level and sample-level alignment signals, favoring spatial reasoning over text-centric tasks while preserving the diversity of the VLM data. We will release all code, data and models for future research.