AI Navigate

Draft-and-Target Sampling for Video Generation Policy

arXiv cs.CV / 3/17/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes Draft-and-Target Sampling, a training-free diffusion inference paradigm to improve the efficiency of video generation policies.
  • It introduces a self-play denoising approach with two complementary trajectories in a single model: draft sampling takes large steps to generate a global trajectory quickly, and target sampling takes small steps to verify and refine it.
  • Speed is further improved via token chunking and a progressive acceptance strategy that reduce redundant computation, yielding up to a 2.1x speedup across three benchmarks.
  • The results show improved efficiency with minimal compromise to the success rate, and the authors have released their code publicly.

Abstract

Video generation models have been used as robot policies to predict the future states of a task execution, conditioned on a task description and an observation. Previous works ignore their high computational cost and long inference time. To address this challenge, we propose Draft-and-Target Sampling, a novel, training-free diffusion inference paradigm that improves the inference efficiency of video generation policies. We introduce a self-play denoising approach that uses two complementary denoising trajectories in a single model: draft sampling takes large steps to generate a global trajectory quickly, and target sampling takes small steps to verify it. To further speed up generation, we introduce token chunking and a progressive acceptance strategy to reduce redundant computation. Experiments on three benchmarks show that our method achieves up to a 2.1x speedup and improves the efficiency of current state-of-the-art methods with minimal compromise to the success rate. Our code is available.
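The draft/target interplay described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: `denoise_step` is a toy stand-in for one call to the diffusion model, and the `draft_stride` and `tol` parameters, the acceptance test, and all function names are assumptions made for illustration.

```python
import numpy as np

def denoise_step(x, t_from, t_to, seed=0):
    """Toy stand-in for one denoising update from noise level t_from
    down to t_to (a real video policy would query the diffusion model)."""
    rng = np.random.default_rng(seed)
    drift = rng.standard_normal(x.shape) * 0.01  # deterministic per seed
    return x * (t_to / t_from) + drift

def draft_and_target_sample(x, timesteps, draft_stride=4, tol=0.5):
    """Illustrative draft-and-target loop (hypothetical parameters):
      - draft trajectory: jump `draft_stride` timesteps at once (fast)
      - target trajectory: cover the same span with small steps (accurate)
      - progressive acceptance: keep the cheap draft result while it stays
        within `tol` of the target; otherwise fall back to the target.
    """
    i = 0
    while i < len(timesteps) - 1:
        j = min(i + draft_stride, len(timesteps) - 1)
        # Draft: one large step across the whole span.
        draft = denoise_step(x, timesteps[i], timesteps[j], seed=i)
        # Target: small verification steps over the same span.
        target = x
        for k in range(i, j):
            target = denoise_step(target, timesteps[k], timesteps[k + 1], seed=i)
        # Progressive acceptance: accept the draft if it agrees with the target.
        x = draft if np.linalg.norm(draft - target) < tol else target
        i = j
    return x
```

In this toy form the target pass is always computed in full, so there is no actual saving; the point is only to show the control flow. The speedup in the paper comes from how verification is amortized, e.g. via token chunking, which this sketch does not model.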