Persistent Robot World Models: Stabilizing Multi-Step Rollouts via Reinforcement Learning

arXiv cs.RO · March 27, 2026


Key Points

  • The paper tackles the failure mode of action-conditioned robot world models that degrade during autoregressive, multi-step rollouts because prediction errors compound over time.
  • It proposes an RL-based post-training method that trains the model on its own autoregressive rollouts (instead of ground-truth histories), including a diffusion-model-adapted contrastive RL objective with convergence guarantees.
  • A variable-length candidate rollout strategy is used to generate and compare multiple futures from the same state, reinforcing higher-fidelity predictions over lower-fidelity ones.
  • The approach introduces multi-view, clip-level visual fidelity rewards with low-variance training signals aggregated across camera views.
  • Experiments on the DROID dataset report new state-of-the-art rollout fidelity, including improvements in LPIPS/SSIM, strong win rates in paired comparisons, and an 80% preference rate in a blind human study.
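The candidate-comparison idea above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: `rollout_fidelity` is a hypothetical stand-in for the paper's perceptual rewards (LPIPS/SSIM-based), and the candidate clips are synthetic arrays of varying length.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout_fidelity(clip):
    """Hypothetical clip-level fidelity score (higher is better).
    Toy proxy: penalize mean absolute deviation from the target
    (all-zeros here); the paper instead uses perceptual metrics."""
    return -float(np.mean(np.abs(clip)))

def prefer_best(candidates):
    """Score multiple candidate futures generated from the same
    rollout state; the highest-fidelity candidate is the one whose
    prediction gets reinforced over the others."""
    scores = [rollout_fidelity(c) for c in candidates]
    return int(np.argmax(scores)), scores

# Three variable-length candidate clips (frames x H x W) from one state,
# with increasing noise levels, so candidate 0 should score highest.
candidates = [
    rng.normal(0.0, sigma, size=(n_frames, 4, 4))
    for sigma, n_frames in [(0.1, 8), (0.5, 10), (1.0, 6)]
]
best, scores = prefer_best(candidates)
```

Comparing variable-length candidates this way only needs a relative ordering of scores, which is what a contrastive objective consumes.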

Abstract

Action-conditioned robot world models generate future video frames of the manipulated scene given a robot action sequence, offering a promising alternative for simulating tasks that are difficult to model with traditional physics engines. However, these models are optimized for short-term prediction and break down when deployed autoregressively: each predicted clip feeds back as context for the next, causing errors to compound and visual quality to rapidly degrade. We address this through the following contributions. First, we introduce a reinforcement learning (RL) post-training scheme that trains the world model on its own autoregressive rollouts rather than on ground-truth histories. We achieve this by adapting a recent contrastive RL objective for diffusion models to our setting and show that its convergence guarantees carry over exactly. Second, we design a training protocol that generates and compares multiple candidate variable-length futures from the same rollout state, reinforcing higher-fidelity predictions over lower-fidelity ones. Third, we develop efficient, multi-view visual fidelity rewards that combine complementary perceptual metrics across camera views and are aggregated at the clip level for dense, low-variance training signal. Fourth, we show that our approach establishes a new state-of-the-art for rollout fidelity on the DROID dataset, outperforming the strongest baseline on all metrics (e.g., LPIPS reduced by 14% on external cameras, SSIM improved by 9.1% on the wrist camera), winning 98% of paired comparisons, and achieving an 80% preference rate in a blind human study.
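A minimal sketch of the clip-level, multi-view reward aggregation described above, under stated assumptions: the metric here is a toy MSE-based score standing in for the complementary perceptual metrics the paper combines (e.g. LPIPS and SSIM), and the per-view dict layout is hypothetical.

```python
import numpy as np

def mse_score(a, b):
    """Toy stand-in for a perceptual metric; higher is better."""
    return -float(np.mean((a - b) ** 2))

def clip_level_reward(pred, gt, metrics=(mse_score,)):
    """pred/gt: dict mapping camera-view name -> array [T, H, W].

    Score every frame of every view with every metric, average over
    metrics and frames (clip level), then over camera views, yielding
    one dense, low-variance scalar reward per clip."""
    per_view = []
    for view in pred:
        frame_scores = [
            np.mean([m(pf, gf) for m in metrics])
            for pf, gf in zip(pred[view], gt[view])
        ]
        per_view.append(np.mean(frame_scores))
    return float(np.mean(per_view))

# Two camera views, 4-frame clips; a perfect prediction scores 0.0.
gt = {"ext_cam": np.ones((4, 8, 8)), "wrist_cam": np.zeros((4, 8, 8))}
perfect = clip_level_reward(gt, gt)
noisy = clip_level_reward(
    {k: v + 0.1 for k, v in gt.items()}, gt
)
```

Averaging at the clip level rather than rewarding single frames is what keeps the training signal dense and low-variance.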
