
S-VAM: Shortcut Video-Action Model by Self-Distilling Geometric and Semantic Foresight

arXiv cs.CV / March 18, 2026

📰 News · Models & Research

Key Points

  • S-VAM introduces a shortcut video-action model that foresees coherent geometric and semantic representations in a single forward pass, enabling real-time inference for manipulation tasks.
  • The method employs a self-distillation strategy that condenses multi-step denoising priors into one-step inference.
  • Vision foundation model representations from the diffusion model's multi-step generated videos are used as teacher targets, with lightweight decouplers learning to map noisy one-step features to these targets.
  • Extensive experiments in simulation and on real robots demonstrate that S-VAM outperforms state-of-the-art methods in efficiency and precision.
  • The project page provides additional details and resources for the approach.

Abstract

Video action models (VAMs) have emerged as a promising paradigm for robot learning, owing to their powerful visual foresight for complex manipulation tasks. However, current VAMs, typically relying on either slow multi-step video generation or noisy one-step feature extraction, cannot simultaneously guarantee real-time inference and high-fidelity foresight. To address this limitation, we propose S-VAM, a shortcut video-action model that foresees coherent geometric and semantic representations via a single forward pass. Serving as a stable blueprint, these foreseen representations significantly simplify the action prediction. To enable this efficient shortcut, we introduce a novel self-distillation strategy that condenses structured generative priors of multi-step denoising into one-step inference. Specifically, vision foundation model (VFM) representations extracted from the diffusion model's own multi-step generated videos provide teacher targets. Lightweight decouplers, as students, learn to directly map noisy one-step features to these targets. Extensive experiments in simulation and the real world demonstrate that our S-VAM outperforms state-of-the-art methods, enabling efficient and precise manipulation in complex environments. Our project page is https://haodong-yan.github.io/S-VAM/
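The self-distillation step described in the abstract can be sketched numerically: teacher targets are vision-foundation-model (VFM) features of a multi-step generated video, and a lightweight decoupler learns to map noisy one-step features to those targets. The sketch below is illustrative only, with hypothetical shapes, a toy denoiser, a random-projection stand-in for the VFM, and a linear decoupler fit by least squares in place of gradient training; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: B feature vectors of dimension D (stand-ins for
# video-latent tokens).
B, D = 4, 32

def multi_step_denoise(x, steps=8):
    # Toy stand-in for the diffusion model's multi-step video generation:
    # each step shrinks the noise by a constant factor (illustrative only).
    for _ in range(steps):
        x = 0.5 * x
    return x

# Stand-in for a frozen VFM encoder: a fixed random orthogonal projection.
W_vfm = np.linalg.qr(rng.standard_normal((D, D)))[0]

def vfm_features(video):
    return video @ W_vfm

# Teacher targets: VFM features extracted from the multi-step generation.
noisy_latent = rng.standard_normal((B, D))
teacher = vfm_features(multi_step_denoise(noisy_latent, steps=8))

# Student: a lightweight linear "decoupler" that maps noisy one-step
# features directly to the teacher targets (fit in closed form here,
# where the paper would train with gradients).
one_step = multi_step_denoise(noisy_latent, steps=1)
W_dec, *_ = np.linalg.lstsq(one_step, teacher, rcond=None)
student = one_step @ W_dec

# Distillation objective: mean squared error to the teacher targets.
distill_loss = float(np.mean((student - teacher) ** 2))
```

In this toy setup both maps are linear, so the decoupler fits the teacher targets almost exactly; the point is only the data flow: multi-step generation feeds a frozen feature extractor to produce targets, and the one-step branch is regressed onto them so inference needs a single forward pass.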