Hi-WM: Human-in-the-World-Model for Scalable Robot Post-Training

arXiv cs.RO / 4/24/2026


Key Points

  • The paper introduces Human-in-the-World-Model (Hi-WM), a robot post-training framework that uses a learned world model to replace much of the costly real-world human-in-the-loop process.
  • During training, the policy is rolled out inside the world model, and when trajectories become failure-prone, a human intervenes in the simulation to provide short corrective actions.
  • Hi-WM supports caching of intermediate states plus rollback and branching, enabling reuse of a single failure state to generate multiple corrective continuations and produce dense supervision on weak behaviors.
  • Experiments on three real-world manipulation tasks, spanning rigid and deformable objects and two policy backbones, show large real-world gains: a 37.9-point average improvement over the base policy, with world-model evaluation correlating strongly with real-world success (r = 0.953).
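The caching-with-rollback-and-branching idea amounts to storing rollout states as a tree rather than a linear buffer, so a failure state can be revisited and given several corrective continuations. A minimal sketch of such a cache (the class and method names here are illustrative, not the paper's API):

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """A cached world-model state; children are branched continuations."""
    state: object
    parent: "Node | None" = None
    children: list = field(default_factory=list)


class RolloutCache:
    """Caches intermediate rollout states as a tree, so any earlier
    state can be rolled back to and branched into multiple corrections."""

    def __init__(self, initial_state):
        self.root = Node(initial_state)
        self.cursor = self.root  # current position along the rollout

    def step(self, next_state):
        """Cache the next state as a child of the current one."""
        node = Node(next_state, parent=self.cursor)
        self.cursor.children.append(node)
        self.cursor = node
        return node

    def rollback(self, node):
        """Rewind the rollout to any previously cached state."""
        self.cursor = node

    def branch(self, node, corrective_states):
        """Attach one corrective continuation branching from `node`."""
        self.rollback(node)
        for s in corrective_states:
            self.step(s)
        return self.cursor  # leaf of the new corrective branch
```

Because each `branch` call rewinds to the same node before stepping, a single failure state can anchor many corrective trajectories, which is what yields the dense supervision the paper describes.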

Abstract

Post-training is essential for turning pretrained generalist robot policies into reliable task-specific controllers, but existing human-in-the-loop pipelines remain tied to physical execution: each correction requires robot time, scene setup, resets, and operator supervision in the real world. Meanwhile, action-conditioned world models have been studied mainly for imagination, synthetic data generation, and policy evaluation. We propose **Human-in-the-World-Model (Hi-WM)**, a post-training framework that uses a learned world model as a reusable corrective substrate for failure-targeted policy improvement. A policy is first rolled out in closed loop inside the world model; when the rollout becomes incorrect or failure-prone, a human intervenes directly in the model to provide short corrective actions. Hi-WM caches intermediate states and supports rollback and branching, allowing a single failure state to be reused for multiple corrective continuations and yielding dense supervision around behaviors that the base policy handles poorly. The resulting corrective trajectories are then added back to the training set for post-training. We evaluate Hi-WM on three real-world manipulation tasks spanning both rigid and deformable object interaction, and on two policy backbones. Hi-WM improves real-world success by 37.9 points on average over the base policy and by 19.0 points over a world-model closed-loop baseline, while world-model evaluation correlates strongly with real-world performance (r = 0.953). These results suggest that world models can serve not only as generators or evaluators, but also as effective corrective substrates for scalable robot post-training.
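The overall procedure the abstract describes (closed-loop rollout in the world model, failure detection, human correction from the failure state, and aggregation of corrective trajectories for fine-tuning) can be sketched as a single training round. Everything below is a hypothetical illustration: `world_model`, `policy`, and `human` are placeholder callables standing in for the paper's components, not its actual interface.

```python
def hi_wm_post_training_round(policy, world_model, human, dataset,
                              max_steps=50, n_branches=3):
    """One illustrative Hi-WM round: roll the policy out inside the
    world model, and on a failure-prone state collect several human
    corrective continuations from the same point."""
    state = world_model.reset()
    trajectory = [state]
    for _ in range(max_steps):
        action = policy(state)                     # closed-loop rollout
        state = world_model.step(state, action)
        trajectory.append(state)
        if world_model.failure_prone(state):
            # Roll back to just before the failure and reuse that state
            # for multiple short human corrections (branching).
            failure_state = trajectory[-2]
            for _ in range(n_branches):
                corrective = human.correct(world_model, failure_state)
                dataset.append(trajectory[:-1] + corrective)
            break
    return dataset  # corrective trajectories are later used to fine-tune
```

In the paper's pipeline the resulting `dataset` would then be fed back into supervised post-training of the policy; how failures are flagged and how corrections are recorded inside the world model are specifics the summary does not detail.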