Do World Action Models Generalize Better than VLAs? A Robustness Study

arXiv cs.RO / 3/24/2026


Key Points

  • The paper compares vision-language-action (VLA) policies with world action models (WAMs) for robot action planning, focusing on whether WAMs generalize and remain robust under unseen conditions and perturbations.
  • WAMs are described as being adapted from world models trained on large-scale video corpora to predict future states, with latent representations decoded into robot actions via minor modifications.
  • Experiments evaluate state-of-the-art VLA policies and recently released WAMs on LIBERO-Plus and RoboTwin 2.0-Plus under multiple visual and language perturbations.
  • Results indicate WAMs deliver strong robustness, including LingBot-VA at 74.2% success on RoboTwin 2.0-Plus and Cosmos-Policy at 82.2% on LIBERO-Plus.
  • The study finds that VLAs can match WAM robustness on some tasks only after extensive training on diverse robotic data, while hybrid approaches that partially incorporate video-based dynamics fall in between, implying that how video priors are integrated matters.

Abstract

Robot action planning in the real world is challenging: it requires not only understanding the current state of the environment but also predicting how it will evolve in response to actions. Vision-language-action (VLA) policies, which repurpose large-scale vision-language models for robot action generation via action experts, have achieved notable success across a variety of robotic tasks. Nevertheless, their performance remains constrained by the scope of their training data, exhibiting limited generalization to unseen scenarios and vulnerability to diverse contextual perturbations. More recently, world models have been revisited as an alternative to VLAs. These models, referred to as world action models (WAMs), are built upon world models trained on large corpora of video data to predict future states; with minor adaptations, their latent representations can be decoded into robot actions. It has been suggested that their explicit dynamics-prediction capacity, combined with spatiotemporal priors acquired from web-scale video pretraining, enables WAMs to generalize more effectively than VLAs. In this paper, we conduct a comparative study of prominent state-of-the-art VLA policies and recently released WAMs, evaluating their performance on the LIBERO-Plus and RoboTwin 2.0-Plus benchmarks under various visual and language perturbations. Our results show that WAMs achieve strong robustness, with LingBot-VA reaching a 74.2% success rate on RoboTwin 2.0-Plus and Cosmos-Policy achieving 82.2% on LIBERO-Plus. While VLAs such as π0.5 can achieve comparable robustness on certain tasks, they typically require extensive training with diverse robotic datasets and varied learning objectives. Hybrid approaches that partially incorporate video-based dynamics learning exhibit intermediate robustness, highlighting the importance of how video priors are integrated.
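
The WAM recipe the abstract describes, a video-pretrained world model whose latent prediction is decoded into a robot action by a small added head, can be sketched as follows. This is a minimal illustration only: the dimensions, function names, and linear layers are hypothetical stand-ins, not the architecture of LingBot-VA, Cosmos-Policy, or any model evaluated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: latent state size of the world model and a
# 7-DoF robot action (e.g. 6 end-effector deltas + gripper command).
LATENT_DIM, ACTION_DIM = 512, 7

def world_model_latent(observation: np.ndarray) -> np.ndarray:
    """Stand-in for a frozen, video-pretrained world model: in the real
    setting this would predict the future latent state from the current
    observation; here it is just a fixed random projection."""
    w = rng.standard_normal((LATENT_DIM, observation.size)) / np.sqrt(observation.size)
    return np.tanh(w @ observation.ravel())

class ActionHead:
    """The 'minor adaptation': a small learned head that decodes the
    world model's latent prediction into a robot action vector."""
    def __init__(self) -> None:
        self.w = rng.standard_normal((ACTION_DIM, LATENT_DIM)) / np.sqrt(LATENT_DIM)
        self.b = np.zeros(ACTION_DIM)

    def __call__(self, latent: np.ndarray) -> np.ndarray:
        return self.w @ latent + self.b

obs = rng.standard_normal((64, 64, 3))          # toy image observation
action = ActionHead()(world_model_latent(obs))  # latent -> 7-DoF action
assert action.shape == (ACTION_DIM,)
```

In practice only the action head (and possibly light adapters) would be trained on robot data, which is why WAMs can inherit spatiotemporal priors from web-scale video pretraining while still emitting executable actions.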