Learning Vision-Language-Action World Models for Autonomous Driving

arXiv cs.CV / April 13, 2026


Key Points

  • The paper introduces VLA-World, a vision-language-action world model for autonomous driving that aims to improve foresight and safety by adding temporal dynamics and global consistency beyond typical VLA end-to-end driving models.
  • VLA-World first generates next-frame future imagery using an action-derived feasible trajectory, then performs reflective reasoning over its own imagined future to refine the predicted trajectory.
  • To enable training and evaluation, the authors curate nuScenes-GR-20K, a generative reasoning dataset derived from nuScenes, and train the system with a three-stage pipeline (pretraining, supervised fine-tuning, and reinforcement learning).
  • Experiments reportedly show VLA-World outperforms state-of-the-art VLA and world-model baselines on both planning and future-generation benchmarks, with improved interpretability.
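The imagine-then-reflect loop described above can be sketched in a few lines. Everything here is a hypothetical stand-in: the function names, data shapes, and toy arithmetic are ours, not the paper's, and each stub replaces what is in reality a neural component.

```python
# Minimal sketch of VLA-World's two-step loop (assumed structure):
# action -> feasible trajectory -> imagined next frame -> refined trajectory.
# All names and computations below are illustrative placeholders.

def derive_feasible_trajectory(observation, action):
    """Stub: roll the action forward to get a coarse waypoint list."""
    x, y = observation["ego_xy"]
    dx, dy = action
    return [(x + dx * t, y + dy * t) for t in range(1, 4)]

def imagine_next_frame(observation, trajectory):
    """Stub for the generative world model: trajectory-conditioned
    next-frame prediction (a dict standing in for an image)."""
    return {"frame": "t+1", "conditioned_on": trajectory}

def reflect_and_refine(trajectory, imagined_frame):
    """Stub for reflective reasoning: the model re-reads its own
    imagined future and adjusts the plan (toy nudge toward origin)."""
    return [(0.9 * x, 0.9 * y) for (x, y) in trajectory]

def vla_world_step(observation, action):
    coarse = derive_feasible_trajectory(observation, action)
    imagined = imagine_next_frame(observation, coarse)
    refined = reflect_and_refine(coarse, imagined)
    return coarse, imagined, refined

obs = {"ego_xy": (0.0, 0.0)}
coarse, imagined, refined = vla_world_step(obs, action=(1.0, 0.5))
print(coarse)   # coarse action-derived trajectory
print(refined)  # trajectory after reflecting on the imagined frame
```

The point of the sketch is the control flow, not the math: the trajectory conditions the imagined frame, and the imagined frame in turn feeds back into the trajectory.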

Abstract

Vision-Language-Action (VLA) models have recently achieved notable progress in end-to-end autonomous driving by integrating perception, reasoning, and control within a unified multimodal framework. However, they often lack explicit modeling of temporal dynamics and global world consistency, which limits their foresight and safety. In contrast, world models can simulate plausible future scenes but generally struggle to reason about or evaluate the futures they generate. In this work, we present VLA-World, a simple yet effective VLA world model that unifies predictive imagination with reflective reasoning to improve driving foresight. VLA-World first uses an action-derived feasible trajectory to guide the generation of the next-frame image, capturing rich spatial and temporal cues that describe how the surrounding environment evolves. The model then reasons over this self-generated imagined future frame to refine the predicted trajectory, achieving higher performance and better interpretability. To support this pipeline, we curate nuScenes-GR-20K, a generative reasoning dataset derived from nuScenes, and employ a three-stage training strategy comprising pretraining, supervised fine-tuning, and reinforcement learning. Extensive experiments demonstrate that VLA-World consistently surpasses state-of-the-art VLA and world-model baselines on both planning and future-generation benchmarks. Project page: https://vlaworld.github.io
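The abstract's three-stage training strategy (pretraining → supervised fine-tuning → reinforcement learning) amounts to running the same model through a fixed sequence of objectives. A minimal sketch, with stage names taken from the abstract but all objectives and the runner function assumed by us:

```python
# Hypothetical outline of the three-stage schedule; the real stages
# would each update model weights, which we stub out as log entries.
STAGES = [
    {"name": "pretraining", "objective": "trajectory-conditioned next-frame generation"},
    {"name": "supervised fine-tuning", "objective": "generative reasoning on nuScenes-GR-20K"},
    {"name": "reinforcement learning", "objective": "reward-driven trajectory refinement"},
]

def run_training(stages):
    """Run the stages in order, recording which objective each applies."""
    log = []
    for stage in stages:
        log.append(f"{stage['name']}: {stage['objective']}")
    return log

for line in run_training(STAGES):
    print(line)
```

The design point is that the stages are sequential, each building on the weights (here, merely the log) produced by the previous one.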