EmbodiedMidtrain: Bridging the Gap between Vision-Language Models and Vision-Language-Action Models via Mid-training
arXiv cs.CL / 4/23/2026
Key Points
- Vision-Language-Action Models (VLAs) often start from off-the-shelf Vision-Language Models (VLMs) that are not adapted to embodied settings, creating a distribution gap that limits downstream performance.
- The paper shows that VLA data forms compact regions largely separated from the broader VLM distribution, with alignment strength varying significantly both across VLM data sources and within individual sources.
- It introduces “EmbodiedMidtrain,” a mid-training data engine that uses a lightweight, learnable proximity estimator to select VLA-aligned candidates from a large pool of VLM data, then mid-trains the VLM on the curated mixture before VLA fine-tuning (see the sketch after this list).
- Experiments on three robot manipulation benchmarks demonstrate consistent gains across multiple VLM backbones, with results competitive with expert VLAs and with larger-scale off-the-shelf VLMs trained at higher budgets.
- The authors find that mid-training yields a stronger initialization for VLA fine-tuning, with benefits appearing from the earliest training steps and growing over time; the data engine captures alignment at both the dataset and sample levels while preserving diversity.
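
To make the selection step concrete, here is a minimal sketch of how a lightweight, learnable proximity estimator could score and filter a VLM data pool. This is an illustration under assumptions, not the paper's implementation: the estimator is stood in for by a simple logistic-regression probe over frozen embeddings, and all names (`fit_proximity_estimator`, `select_aligned_candidates`, `keep_ratio`) and shapes are hypothetical.

```python
# Hypothetical sketch of proximity-based data selection, NOT the paper's code.
# Idea: train a small binary classifier to separate VLA embeddings from generic
# VLM embeddings, then keep the pool samples it scores as most "VLA-like".
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_proximity_estimator(vla_embs: np.ndarray, vlm_embs: np.ndarray):
    """Fit a lightweight probe that separates VLA samples (label 1)
    from generic VLM samples (label 0) in a frozen embedding space."""
    X = np.concatenate([vla_embs, vlm_embs])
    y = np.concatenate([np.ones(len(vla_embs)), np.zeros(len(vlm_embs))])
    return LogisticRegression(max_iter=1000).fit(X, y)

def select_aligned_candidates(estimator, pool_embs: np.ndarray,
                              keep_ratio: float = 0.1) -> np.ndarray:
    """Score each pool sample by predicted proximity to the VLA
    distribution and return indices of the top keep_ratio fraction."""
    scores = estimator.predict_proba(pool_embs)[:, 1]  # P(sample is VLA-like)
    k = max(1, int(keep_ratio * len(pool_embs)))
    return np.argsort(scores)[::-1][:k]

# Toy example mirroring the paper's observation: a compact VLA region
# versus a broad VLM distribution (synthetic embeddings, hypothetical dims).
rng = np.random.default_rng(0)
vla_embs = rng.normal(1.0, 0.2, size=(500, 64))   # compact VLA cluster
vlm_embs = rng.normal(0.0, 1.0, size=(5000, 64))  # broad VLM pool
estimator = fit_proximity_estimator(vla_embs, vlm_embs)
selected = select_aligned_candidates(estimator, vlm_embs, keep_ratio=0.1)
print(f"selected {len(selected)} of {len(vlm_embs)} pool samples")
```

In this framing, the selected subset would then be mixed with other data (the paper describes a curated mixture that preserves diversity) and used for a mid-training pass on the VLM before VLA fine-tuning.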