FlashSAC: Fast and Stable Off-Policy Reinforcement Learning for High-Dimensional Robot Control

arXiv cs.LG / 4/7/2026


Key Points

  • FlashSAC is introduced as a fast, stable off-policy reinforcement learning algorithm for high-dimensional robot control, building on Soft Actor-Critic (SAC) to address limitations of on-policy methods such as PPO.
  • The approach reduces the number of critic-related gradient updates while scaling up model capacity and data throughput, motivated by scaling-law ideas from supervised learning; a minimal sketch of this recipe follows this list.
  • FlashSAC improves training stability by explicitly bounding weight, feature, and gradient norms to curb critic error accumulation from bootstrapping on diverse replay data.
  • Experiments across 60+ tasks in 10 simulators show FlashSAC outperforming PPO and strong off-policy baselines in both final performance and training efficiency, especially for high-dimensional tasks such as dexterous manipulation.
  • In sim-to-real humanoid locomotion, FlashSAC is reported to cut training time from hours to minutes, highlighting its potential for practical transfer to real robots.
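
To make the update-to-data trade-off in the second point concrete, here is a minimal sketch of a low update-to-data training loop: many parallel environments feed a replay buffer, but only a few large-batch gradient updates are taken per iteration. This is an assumption-laden toy, not the authors' implementation; all dimensions, hyperparameters, and the random placeholder data are illustrative.

```python
import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative assumptions, not values from the paper.
OBS_DIM, ACT_DIM = 48, 12
NUM_ENVS = 256           # parallel simulators -> high data throughput
UPDATES_PER_ITER = 1     # deliberately few gradient updates per iteration
BATCH_SIZE = 4096        # large batches compensate for the few updates
HIDDEN = 1024            # bigger critic, per the scaling-law motivation

critic = nn.Sequential(
    nn.Linear(OBS_DIM + ACT_DIM, HIDDEN), nn.ReLU(),
    nn.Linear(HIDDEN, 1),
)
optim = torch.optim.Adam(critic.parameters(), lr=3e-4)
buffer: deque = deque(maxlen=1_000_000)

for iteration in range(100):
    # 1) Collect a large slab of off-policy data. Random placeholders stand
    #    in for stepping NUM_ENVS simulators with the current policy.
    for _ in range(NUM_ENVS):
        obs, act = torch.randn(OBS_DIM), torch.randn(ACT_DIM)
        target = torch.randn(1)  # placeholder for a bootstrapped Bellman target
        buffer.append((obs, act, target))

    # 2) Take only a few large-batch updates: keeping the update-to-data
    #    ratio low gives bootstrapped critic errors fewer chances to compound.
    if len(buffer) >= BATCH_SIZE:
        for _ in range(UPDATES_PER_ITER):
            obs_b, act_b, tgt_b = map(
                torch.stack, zip(*random.sample(buffer, BATCH_SIZE))
            )
            loss = F.mse_loss(critic(torch.cat([obs_b, act_b], dim=-1)), tgt_b)
            optim.zero_grad()
            loss.backward()
            optim.step()
```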

Abstract

Reinforcement learning (RL) is a core approach for robot control when expert demonstrations are unavailable. On-policy methods such as Proximal Policy Optimization (PPO) are widely used for their stability, but their reliance on narrowly distributed on-policy data limits accurate policy evaluation in high-dimensional state and action spaces. Off-policy methods can overcome this limitation by learning from a broader state-action distribution, yet suffer from slow convergence and instability, as fitting a value function over diverse data requires many gradient updates, causing critic errors to accumulate through bootstrapping. We present FlashSAC, a fast and stable off-policy RL algorithm built on Soft Actor-Critic. Motivated by scaling laws observed in supervised learning, FlashSAC sharply reduces gradient updates while compensating with larger models and higher data throughput. To maintain stability at increased scale, FlashSAC explicitly bounds weight, feature, and gradient norms, curbing critic error accumulation. Across over 60 tasks in 10 simulators, FlashSAC consistently outperforms PPO and strong off-policy baselines in both final performance and training efficiency, with the largest gains on high-dimensional tasks such as dexterous manipulation. In sim-to-real humanoid locomotion, FlashSAC reduces training time from hours to minutes, demonstrating the promise of off-policy RL for sim-to-real transfer.
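
The abstract's stability mechanism, bounding weight, feature, and gradient norms, can likewise be sketched in a few lines. The snippet below is a hedged illustration rather than the paper's code: it caps each linear layer's Frobenius weight norm by projection, L2-normalizes the critic's penultimate features, and clips the global gradient norm. All thresholds and layer sizes are assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

MAX_WEIGHT_NORM = 1.0   # assumed cap on each weight matrix's Frobenius norm
MAX_GRAD_NORM = 1.0     # assumed cap on the global gradient norm

class BoundedCritic(nn.Module):
    """Q-network whose penultimate features are projected onto the unit sphere."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim + act_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        x = torch.cat([obs, act], dim=-1)
        x = F.relu(self.fc1(x))
        # Feature-norm bound: L2-normalize the features so their norm
        # cannot grow as model size and data throughput scale up.
        x = F.normalize(F.relu(self.fc2(x)), dim=-1)
        return self.out(x)

@torch.no_grad()
def project_weight_norms(model: nn.Module, max_norm: float = MAX_WEIGHT_NORM):
    """Weight-norm bound: rescale any weight matrix whose Frobenius norm
    exceeds max_norm (a simple projection; one of several possible schemes)."""
    for m in model.modules():
        if isinstance(m, nn.Linear):
            norm = m.weight.norm()
            if norm > max_norm:
                m.weight.mul_(max_norm / norm)

# Usage inside one critic update step (dimensions and data are placeholders):
critic = BoundedCritic(obs_dim=48, act_dim=12)
optim = torch.optim.Adam(critic.parameters(), lr=3e-4)

obs = torch.randn(256, 48)
act = torch.randn(256, 12)
target_q = torch.randn(256, 1)  # placeholder for a bootstrapped Bellman target

loss = F.mse_loss(critic(obs, act), target_q)
optim.zero_grad()
loss.backward()
# Gradient-norm bound: clip the global gradient norm before stepping.
torch.nn.utils.clip_grad_norm_(critic.parameters(), MAX_GRAD_NORM)
optim.step()
project_weight_norms(critic)  # re-impose the weight-norm bound after the step
```

Projecting weights after each optimizer step is only one way to enforce such a bound; normalization-based parameterizations would serve the same purpose in this sketch.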
