Seeing Realism from Simulation: Efficient Video Transfer for Vision-Language-Action Data Augmentation

arXiv cs.RO / 5/5/2026


Key Points

  • Vision-language-action (VLA) models often generalize poorly from simulated data due to a visual domain gap and limited environmental diversity compared with real-world videos.
  • The proposed framework improves simulation-to-real transfer by extracting structured conditions from simulation using video semantic segmentation and video captioning, then rewriting captions to diversify environments (the overall pipeline is sketched in code after this list).
  • A conditional video transfer model synthesizes more realistic training videos while preserving task semantics and action trajectories from the original simulated VLA data.
  • To scale augmentation efficiently, the method introduces diffusion feature reuse across adjacent timesteps to speed generation and a coreset sampling strategy to select a compact, non-redundant subset under limited compute (both ideas are sketched in code after the abstract).
  • Experiments on Robotwin 2.0, LIBERO, LIBERO-Plus, and a real robotic platform show consistent gains, including an 8% improvement to RDT-1B on Robotwin 2.0 and a 5.1% boost to π0 on LIBERO-Plus.

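To make the pipeline concrete, here is a rough illustration of how the three stages described above might fit together. This is a sketch under assumptions, not the authors' released code: `segmenter`, `captioner`, `rewriter`, and `transfer_model` are hypothetical callables standing in for the actual segmentation, captioning, caption-rewriting, and conditional video transfer models.

```python
from dataclasses import dataclass


@dataclass
class Conditions:
    """Structured conditions extracted from one simulated episode."""
    segmentation: list   # per-frame semantic masks
    caption: str         # video caption describing scene and task


def augment_episode(sim_video, segmenter, captioner, rewriter, transfer_model,
                    num_variants: int = 4):
    """Hedged sketch of the extract-rewrite-transfer pipeline.

    All model handles are hypothetical callables; only the data flow
    mirrors the description in the paper's abstract.
    """
    cond = Conditions(
        segmentation=segmenter(sim_video),   # video semantic segmentation
        caption=captioner(sim_video),        # video captioning
    )
    realistic_videos = []
    for _ in range(num_variants):
        # Rewrite the caption to diversify environments (lighting, textures,
        # background objects) while keeping the task description fixed.
        new_caption = rewriter(cond.caption)
        # Conditional video transfer: the masks pin object layout and action
        # trajectories; the rewritten caption steers appearance.
        realistic_videos.append(
            transfer_model(video=sim_video,
                           masks=cond.segmentation,
                           prompt=new_caption)
        )
    return realistic_videos


if __name__ == "__main__":
    # Toy stand-ins so the sketch executes end to end.
    videos = augment_episode(
        sim_video="sim_ep_000.mp4",
        segmenter=lambda v: [f"mask_for_{v}"],
        captioner=lambda v: "a robot arm picks up a red cube on a gray table",
        rewriter=lambda c: c.replace("gray table", "wooden kitchen counter"),
        transfer_model=lambda video, masks, prompt: (video, prompt),
    )
    print(videos[0])
```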
Abstract

Vision-language-action (VLA) models typically rely on large-scale real-world videos, whereas simulated data, despite being inexpensive and highly parallelizable to collect, often suffers from a substantial visual domain gap and limited environmental diversity, resulting in weak real-world generalization. We present an efficient video augmentation framework that converts simulated VLA videos into realistic training videos while preserving task semantics and action trajectories. Our pipeline extracts structured conditions from simulation via video semantic segmentation and video captioning, rewrites captions to diversify environments, and uses a conditional video transfer model to synthesize realistic videos. To make augmentation practical at scale, we introduce a diffusion feature-reuse mechanism that reuses video tokens across adjacent timesteps to accelerate generation, and a coreset sampling strategy that identifies a compact, non-redundant subset for augmentation under limited computation. Extensive experiments on Robotwin 2.0, LIBERO, LIBERO-Plus, and a real robotic platform demonstrate consistent improvements. For example, our method improves RDT-1B by 8% on Robotwin 2.0, and boosts π0 by 5.1% on the more challenging LIBERO-Plus benchmark. Code is available at: https://github.com/nanfangxiansheng/Seeing-Realism-from-Simulation.
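The first efficiency idea, reusing features across adjacent denoising timesteps, rests on the observation that intermediate features of a diffusion model change slowly from one step to the next, so the expensive blocks need not be recomputed every step. The minimal sketch below illustrates that general caching pattern; the class and loop are hypothetical and much simpler than the paper's token-level mechanism.

```python
import torch
import torch.nn as nn


class CachedBlock(nn.Module):
    """Wraps a block and optionally returns its cached output.

    On "reuse" steps the wrapped block is skipped and the features cached
    at the previous denoising timestep are returned, trading a small
    approximation error for a large reduction in compute.
    """

    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block
        self.cache = None

    def forward(self, x: torch.Tensor, reuse: bool) -> torch.Tensor:
        if reuse and self.cache is not None:
            return self.cache           # skip computation, reuse stale features
        out = self.block(x)
        self.cache = out.detach()       # refresh cache on full-compute steps
        return out


def denoise(blocks, x, num_steps: int, reuse_every: int = 2):
    """Toy denoising loop: recompute block features on every
    `reuse_every`-th step and reuse cached features otherwise."""
    for t in range(num_steps):
        reuse = (t % reuse_every != 0)  # e.g. full compute on even steps only
        h = x
        for blk in blocks:
            h = blk(h, reuse=reuse)
        x = x - 0.1 * h                 # placeholder update rule, not a real sampler
    return x


if __name__ == "__main__":
    dim = 64
    blocks = nn.ModuleList(CachedBlock(nn.Linear(dim, dim)) for _ in range(4))
    video_tokens = torch.randn(2, 16, dim)  # (batch, tokens, dim)
    print(denoise(blocks, video_tokens, num_steps=10).shape)
```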
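The second efficiency idea, coreset sampling, selects which simulated episodes are worth augmenting when compute is limited. The paper does not specify its selection criterion here, so the sketch below uses a standard greedy k-center (farthest-point) strategy over hypothetical episode embeddings as one plausible way to obtain a compact, non-redundant subset.

```python
import numpy as np


def greedy_k_center(embeddings: np.ndarray, k: int, seed: int = 0) -> list:
    """Greedy k-center coreset selection (farthest-point sampling).

    Picks k indices such that every episode is near some selected one,
    yielding a compact, non-redundant subset to augment under a fixed
    compute budget. A generic sketch, not the paper's exact criterion.
    """
    rng = np.random.default_rng(seed)
    n = len(embeddings)
    selected = [int(rng.integers(n))]                  # arbitrary first center
    # Distance from every episode to its nearest selected center.
    dists = np.linalg.norm(embeddings - embeddings[selected[0]], axis=1)
    while len(selected) < k:
        idx = int(np.argmax(dists))                    # farthest point = most novel
        selected.append(idx)
        new_d = np.linalg.norm(embeddings - embeddings[idx], axis=1)
        dists = np.minimum(dists, new_d)               # update nearest-center dists
    return selected


if __name__ == "__main__":
    # Hypothetical embeddings of 500 simulated episodes (e.g. from a video encoder).
    episode_embeddings = np.random.randn(500, 128).astype(np.float32)
    subset = greedy_k_center(episode_embeddings, k=32)
    print(f"augment these {len(subset)} episodes:", subset[:8], "...")
```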
