Mask World Model: Predicting What Matters for Robust Robot Policy Learning

arXiv cs.RO / April 22, 2026


Key Points

  • The paper argues that current world-model approaches for generalist robot policy learning overfit to irrelevant visual factors when they predict high-fidelity RGB video.
  • It proposes Mask World Model (MWM), which uses video diffusion to predict the evolution of semantic masks rather than pixels, creating a geometric information bottleneck.
  • By focusing on semantic/contact dynamics, MWM aims to better capture essential physical interactions while filtering out distracting visual noise.
  • The method combines a mask-dynamics backbone with a diffusion-based policy head for end-to-end robot control.
  • Experiments on LIBERO and RLBench simulations, plus real-world tests and robustness checks (random token pruning), show MWM outperforms RGB-based world models and remains resilient to texture information loss.
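The random token pruning check mentioned above can be illustrated with a minimal sketch. The paper does not publish its pruning code, so the function below is a hypothetical stand-in: it randomly drops a fraction of (stand-in) visual tokens while preserving order, which is the kind of perturbation a texture-robust model like MWM is claimed to tolerate.

```python
import random

def prune_tokens(tokens, drop_frac, seed=None):
    """Randomly drop a fraction of tokens, keeping original order.

    Hypothetical sketch of the robustness probe described in the paper:
    a policy relying on semantic/contact structure rather than texture
    should degrade gracefully as visual tokens are removed.
    """
    rng = random.Random(seed)
    n_keep = max(1, round(len(tokens) * (1.0 - drop_frac)))
    keep_idx = sorted(rng.sample(range(len(tokens)), n_keep))
    return [tokens[i] for i in keep_idx]

tokens = list(range(16))  # stand-in for a sequence of visual patch tokens
pruned = prune_tokens(tokens, drop_frac=0.5, seed=0)
print(len(pruned))  # 8 tokens survive a 50% prune
```

In an actual evaluation one would sweep `drop_frac` and measure task success at each level; the names here (`prune_tokens`, `drop_frac`) are illustrative, not from the paper.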

Abstract

World models derived from large-scale video generative pre-training have emerged as a promising paradigm for generalist robot policy learning. However, standard approaches often focus on high-fidelity RGB video prediction, which can result in overfitting to irrelevant factors such as dynamic backgrounds and illumination changes. These distractions reduce the model's ability to generalize, ultimately leading to unreliable and fragile control policies. To address this, we introduce the Mask World Model (MWM), which leverages video diffusion architectures to predict the evolution of semantic masks instead of pixels. This shift imposes a geometric information bottleneck, forcing the model to capture essential physical dynamics and contact relations while filtering out visual noise. We seamlessly integrate this mask dynamics backbone with a diffusion-based policy head to enable robust end-to-end control. Extensive evaluations demonstrate the superiority of MWM on the LIBERO and RLBench simulation benchmarks, where it significantly outperforms state-of-the-art RGB-based world models. Furthermore, real-world experiments and robustness evaluations (via random token pruning) reveal that MWM exhibits superior generalization and strong resilience to texture information loss.