FASTER: Value-Guided Sampling for Fast RL

arXiv cs.LG / April 22, 2026


Key Points

  • Reinforcement learning methods that sample multiple action candidates at test time can be highly effective, but they are computationally expensive because every candidate must be fully generated before the best one is selected.
  • The paper introduces FASTER, which recovers the benefits of sampling-based test-time scaling for diffusion-based policies by tracing and filtering action candidates earlier in the denoising process.
  • FASTER formulates the denoising-and-selection procedure as a Markov Decision Process (MDP) and learns a value-guided policy to progressively filter candidates while maximizing expected returns.
  • Experiments on long-horizon manipulation tasks show FASTER improves both online and batch-online reinforcement learning performance and attains the best results among compared approaches.
  • When applied to a pretrained VLA, FASTER achieves comparable performance while substantially reducing both training and inference compute requirements, and code is provided on GitHub.

Abstract

Some of the most performant reinforcement learning algorithms today can be prohibitively expensive as they use test-time scaling methods such as sampling multiple action candidates and selecting the best one. In this work, we propose FASTER, a method for getting the benefits of sampling-based test-time scaling of diffusion-based policies without the computational cost by tracing the performance gain of action samples back to earlier in the denoising process. Our key insight is that we can model the denoising of multiple action candidates and selecting the best one as a Markov Decision Process (MDP) where the goal is to progressively filter action candidates before denoising is complete. With this MDP, we can learn a policy and value function in the denoising space that predicts the downstream value of action candidates in the denoising process and filters them while maximizing returns. The result is a method that is lightweight and can be plugged into existing generative RL algorithms. Across challenging long-horizon manipulation tasks in online and batch-online RL, FASTER consistently improves the underlying policies and achieves the best overall performance among the compared methods. Applied to a pretrained VLA, FASTER achieves the same performance while substantially reducing training and inference compute requirements. Code is available at https://github.com/alexanderswerdlow/faster.
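To make the core idea concrete, here is a minimal sketch of value-guided progressive filtering during denoising. This is not the paper's implementation: `denoise_fn` and `value_fn` are hypothetical stand-ins for a diffusion denoising step and a learned value function over partially denoised action candidates; the point is only to show how pruning low-value candidates at each step avoids fully denoising the whole pool.

```python
def progressive_filter(candidates, num_steps, keep_fraction, denoise_fn, value_fn):
    """Denoise a pool of action candidates, pruning low-value ones at
    each step so compute concentrates on the most promising samples."""
    pool = list(candidates)
    for step in range(num_steps):
        # Advance every surviving candidate one denoising step.
        pool = [denoise_fn(c, step) for c in pool]
        # Rank candidates by predicted downstream value and keep the top fraction.
        pool.sort(key=lambda c: value_fn(c, step), reverse=True)
        keep = max(1, int(len(pool) * keep_fraction))
        pool = pool[:keep]
    # The best surviving candidate is the selected action.
    return pool[0]

# Toy demo with scalar "latents" (hypothetical stand-ins):
# denoising shrinks each latent toward zero, and the value function
# prefers latents closest to zero.
denoise_fn = lambda x, step: x * 0.8
value_fn = lambda x, step: -abs(x)

best = progressive_filter([3.0, -1.0, 0.5, 2.0], num_steps=2,
                          keep_fraction=0.5,
                          denoise_fn=denoise_fn, value_fn=value_fn)
print(best)  # the candidate starting at 0.5, denoised twice: ~0.32
```

With `keep_fraction=0.5`, the pool halves each step (4 → 2 → 1), so most candidates are discarded before their denoising completes, which is the source of the compute savings the paper claims.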