ViVa: A Video-Generative Value Model for Robot Reinforcement Learning

arXiv cs.RO / 4/10/2026


Key Points

  • The paper introduces ViVa, a video-generative value model designed for robot reinforcement learning to better estimate state values under partial observability and long-horizon tasks.
  • ViVa takes the robot’s current observation plus proprioception, then predicts future proprioception and a scalar value jointly, using a pretrained video generator to inject spatiotemporal priors into value estimation.
  • The approach targets a key limitation of prior VLM-based value models by capturing temporal dynamics rather than relying on static snapshot embeddings.
  • Integrated into the RECAP framework, ViVa reportedly improves real-world box assembly performance and produces more reliable value signals that track task progress.
  • Qualitative results suggest ViVa generalizes to novel objects across tasks, indicating that video-generative models may provide a promising foundation for value estimation in robotic settings.
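The joint prediction described above can be sketched as a small PyTorch module. This is an illustrative sketch only: the class name, dimensions, and the MLP stand-in for the pretrained video generator's encoder are all assumptions not taken from the paper; the real ViVa repurposes an actual video-generative backbone.

```python
# Hedged sketch of ViVa's interface: observation + proprioception in,
# future proprioception + scalar value out. All names/dims are assumed.
import torch
import torch.nn as nn

class ViVaSketch(nn.Module):
    """Jointly predict future proprioception and a scalar state value."""
    def __init__(self, obs_dim=512, proprio_dim=14, hidden_dim=256, horizon=8):
        super().__init__()
        self.horizon = horizon
        self.proprio_dim = proprio_dim
        # Stand-in for the pretrained video generator's latent encoder
        # (assumption: in the paper this carries spatiotemporal priors).
        self.backbone = nn.Sequential(
            nn.Linear(obs_dim + proprio_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        # Two heads trained jointly, coupling value with foresight.
        self.proprio_head = nn.Linear(hidden_dim, horizon * proprio_dim)
        self.value_head = nn.Linear(hidden_dim, 1)

    def forward(self, obs_latent, proprio):
        z = self.backbone(torch.cat([obs_latent, proprio], dim=-1))
        future_proprio = self.proprio_head(z).view(
            -1, self.horizon, self.proprio_dim)
        value = self.value_head(z).squeeze(-1)
        return future_proprio, value

model = ViVaSketch()
obs = torch.randn(2, 512)   # e.g. an encoded camera frame (batch of 2)
prop = torch.randn(2, 14)   # e.g. joint positions/velocities
future, value = model(obs, prop)
print(future.shape, value.shape)  # torch.Size([2, 8, 14]) torch.Size([2])
```

The two heads share one latent, so value estimation is tied to the same representation that must explain future embodiment dynamics, which is the coupling the paper argues static VLM snapshot embeddings lack.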

Abstract

Vision-language-action (VLA) models have advanced robot manipulation through large-scale pretraining, but real-world deployment remains challenging due to partial observability and delayed feedback. Reinforcement learning addresses this via value functions, which assess task progress and guide policy improvement. However, existing value models built on vision-language models (VLMs) struggle to capture temporal dynamics, undermining reliable value estimation in long-horizon tasks. In this paper, we propose ViVa, a video-generative value model that repurposes a pretrained video generator for value estimation. Taking the current observation and robot proprioception as input, ViVa jointly predicts future proprioception and a scalar value for the current state. By leveraging the spatiotemporal priors of a pretrained video generator, our approach grounds value estimation in anticipated embodiment dynamics, moving beyond static snapshots to intrinsically couple value with foresight. Integrated into RECAP, ViVa delivers substantial improvements on real-world box assembly. Qualitative analysis across all three tasks confirms that ViVa produces more reliable value signals, accurately reflecting task progress. By leveraging spatiotemporal priors from video corpora, ViVa also generalizes to novel objects, highlighting the promise of video-generative models for value estimation.
