VAMPO: Policy Optimization for Improving Visual Dynamics in Video Action Models

arXiv cs.RO / 3/23/2026


Key Points

  • VAMPO proposes a post-training framework that treats multi-step denoising in video action models as a sequential decision process and optimizes the denoising policy with rewards based on expert visual dynamics in latent space.
  • It addresses the objective mismatch in diffusion-based video predictors that prioritize globally plausible predictions over the precise visual dynamics needed for manipulation, reducing errors in object pose, spatial relations, and contact timing that downstream policies rely on.
  • The method introduces an Euler Hybrid sampler that injects stochasticity only at the first denoising step, enabling tractable low-variance policy-gradient estimation while preserving the coherence of the remaining denoising trajectory.
  • Across simulated and real-world manipulation tasks, VAMPO improves task-relevant visual dynamics and downstream action generation, with better generalization when combined with GRPO and a verifiable non-adversarial reward.
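The Euler Hybrid sampler described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names (`euler_hybrid_sample`, `velocity_fn`), the flow-matching-style velocity parameterization, and the Gaussian noise model at the first step are all assumptions. The point it shows is structural: only the first denoising step is sampled stochastically (so it has a tractable Gaussian log-probability for policy-gradient estimation), while every later step is a plain deterministic Euler update, keeping the rest of the denoising trajectory coherent.

```python
import numpy as np

def euler_hybrid_sample(x_T, velocity_fn, num_steps=10, noise_scale=0.5, rng=None):
    """Hypothetical sketch of an Euler Hybrid sampler: stochastic first step
    (with a closed-form Gaussian log-probability), deterministic Euler
    integration for all remaining steps."""
    rng = np.random.default_rng() if rng is None else rng
    ts = np.linspace(1.0, 0.0, num_steps + 1)  # t runs from pure noise (1) to clean (0)
    x = np.asarray(x_T, dtype=float)
    log_prob = 0.0
    for i in range(num_steps):
        t, t_next = ts[i], ts[i + 1]
        dt = t_next - t
        v = velocity_fn(x, t)      # model-predicted velocity at (x, t)
        mean = x + v * dt          # deterministic Euler update
        if i == 0:
            # Inject stochasticity only here: sample around the Euler mean.
            std = noise_scale * abs(dt)
            x = mean + std * rng.standard_normal(x.shape)
            # Gaussian log-density of this single stochastic action.
            log_prob = float(np.sum(
                -0.5 * ((x - mean) / std) ** 2 - np.log(std) - 0.5 * np.log(2 * np.pi)))
        else:
            x = mean               # fully deterministic for the rest of the trajectory
    return x, log_prob
```

Because only one step is stochastic, a REINFORCE-style gradient needs the log-probability of just that step, which keeps the policy-gradient estimate low-variance relative to making every denoising step stochastic.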

Abstract

Video action models are an appealing foundation for Vision-Language-Action systems because they can learn visual dynamics from large-scale video data and transfer this knowledge to downstream robot control. Yet current diffusion-based video predictors are trained with likelihood-surrogate objectives, which encourage globally plausible predictions without explicitly optimizing the precision-critical visual dynamics needed for manipulation. This objective mismatch often leads to subtle errors in object pose, spatial relations, and contact timing that can be amplified by downstream policies. We propose VAMPO, a post-training framework that directly improves visual dynamics in video action models through policy optimization. Our key idea is to formulate multi-step denoising as a sequential decision process and optimize the denoising policy with rewards defined over expert visual dynamics in latent space. To make this optimization practical, we introduce an Euler Hybrid sampler that injects stochasticity only at the first denoising step, enabling tractable low-variance policy-gradient estimation while preserving the coherence of the remaining denoising trajectory. We further combine this design with GRPO and a verifiable non-adversarial reward. Across diverse simulated and real-world manipulation tasks, VAMPO improves task-relevant visual dynamics, leading to better downstream action generation and stronger generalization. Project homepage: https://vampo-robot.github.io/VAMPO/.
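The combination of GRPO with a verifiable, non-adversarial reward mentioned in the abstract can also be sketched. Both functions below are illustrative assumptions: GRPO's defining trait is that advantages are computed relative to a group of rollouts from the same prompt (no learned critic), and a "verifiable non-adversarial reward" is sketched here as a simple distance to expert dynamics in latent space, since the paper defines rewards over expert visual dynamics in that space (the exact distance and names like `latent_dynamics_reward` are not from the source).

```python
import numpy as np

def grpo_advantages(rewards, eps=1e-8):
    """Group-relative advantages in the GRPO style: each rollout's reward is
    standardized against the mean and std of its own group, removing the need
    for a learned value function."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + eps)

def latent_dynamics_reward(pred_latents, expert_latents):
    """Hypothetical verifiable reward: negative mean squared error between the
    predicted and expert visual-dynamics latents. It is 'non-adversarial' in
    that it is a fixed, checkable metric rather than a learned discriminator."""
    p = np.asarray(pred_latents, dtype=float)
    e = np.asarray(expert_latents, dtype=float)
    return -float(np.mean((p - e) ** 2))
```

A higher reward (closer to zero) means the sampled denoising trajectory reproduced the expert's latent dynamics more faithfully; the group-standardized advantages then weight the log-probability of each rollout's stochastic first step in the policy-gradient update.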