Vision-Language-Action in Robotics: A Survey of Datasets, Benchmarks, and Data Engines

arXiv cs.RO / April 28, 2026


Key Points

  • The paper argues that Vision-Language-Action (VLA) progress is bottlenecked not mainly by model architecture, but by underdeveloped data infrastructure for embodied learning.
  • It provides a data-centric survey of VLA research, organizing work into three areas: datasets, benchmarks, and data engines.
  • The analysis finds a persistent fidelity–cost trade-off in large-scale dataset collection and highlights gaps in existing benchmarks for compositional generalization and long-horizon reasoning.
  • It compares data-engine paradigms (simulation-based, video-reconstruction, and automated task generation) and shows shared limitations around physical grounding and sim-to-real transfer.
  • The authors synthesize four open challenges—representation alignment, multimodal supervision, reasoning assessment, and scalable data generation—and advocate treating data infrastructure as a primary research focus.

Abstract

Despite remarkable progress in Vision-Language-Action (VLA) models, a central bottleneck remains underexamined: the data infrastructure that underlies embodied learning. In this survey, we argue that future advances in VLA will depend less on model architecture and more on the co-design of high-fidelity data engines and structured evaluation protocols. To this end, we present a systematic, data-centric analysis of VLA research organized around three pillars: datasets, benchmarks, and data engines. For datasets, we categorize real-world and synthetic corpora along embodiment diversity, modality composition, and action-space formulation, revealing a persistent fidelity–cost trade-off that fundamentally constrains large-scale collection. For benchmarks, we analyze task complexity and environment structure jointly, exposing structural gaps in the evaluation of compositional generalization and long-horizon reasoning that existing protocols fail to address. For data engines, we examine simulation-based, video-reconstruction, and automated task-generation paradigms, identifying their shared limitations in physical grounding and sim-to-real transfer. Synthesizing these analyses, we distill four open challenges: representation alignment, multimodal supervision, reasoning assessment, and scalable data generation. Addressing them, we argue, requires treating data infrastructure as a first-class research problem rather than a background concern.