OGPO: Sample Efficient Full-Finetuning of Generative Control Policies

arXiv cs.LG / 5/6/2026


Key Points

  • The paper proposes Off-policy Generative Policy Optimization (OGPO), a sample-efficient method for full fine-tuning of generative control policies (e.g., diffusion- and flow-based policies) for robot learning.
  • OGPO improves data efficiency by maintaining off-policy critic networks for stronger data reuse and by propagating policy gradients through the entire generative process via a modified PPO objective that uses the critic as the terminal reward (a minimal sketch follows this list).
  • Experiments show state-of-the-art results across multiple manipulation settings, including multi-task learning, high-precision insertion, and dexterous control.
  • A key claimed capability is near full task success when fine-tuning poorly initialized behavior cloning policies with no expert data in the online replay buffer, using limited task-specific hyperparameter tuning.
  • The authors add practical stabilization techniques (e.g., success-buffer regularization, conservative advantages, χ² regularization, and Q-variance reduction) and provide an empirical study of the mechanisms and failure modes that govern successful off-policy full-policy improvement.
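
As a rough illustration of the core objective described above, the sketch below applies a PPO-style clipped surrogate to the summed log-probability of a generative policy's full denoising chain, with an off-policy critic's Q-value standing in as the terminal reward. This is a minimal PyTorch sketch under assumed interfaces (the tensor names, the separate value baseline, and the toy usage are illustrative), not the authors' implementation.

```python
import torch

def ogpo_policy_loss(chain_log_prob_new, chain_log_prob_old,
                     q_values, v_baseline, clip_eps=0.2):
    """PPO-style clipped surrogate over the full generative chain (illustrative)."""
    # Off-policy critic Q(s, a) serves as the terminal reward; a baseline turns it
    # into an advantage. Detached so only the policy is updated by this loss.
    advantages = (q_values - v_baseline).detach()
    # Likelihood ratio of the *entire* denoising chain (sum of per-step log-probs),
    # so the gradient reaches every step of the generative process.
    ratio = torch.exp(chain_log_prob_new - chain_log_prob_old.detach())
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.minimum(unclipped, clipped).mean()

# Toy usage with random tensors standing in for a batch of sampled chains.
batch = 32
new_logp = torch.randn(batch, requires_grad=True)  # would come from the current policy
loss = ogpo_policy_loss(new_logp, torch.randn(batch),
                        torch.randn(batch), torch.randn(batch))
loss.backward()  # gradients flow back into whatever produced new_logp
```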

Abstract

Generative control policies (GCPs), such as diffusion- and flow-based control policies, have emerged as effective parameterizations for robot learning. This work introduces Off-policy Generative Policy Optimization (OGPO), a sample-efficient algorithm for finetuning GCPs that maintains off-policy critic networks to maximize data reuse and propagates policy gradients through the full generative process of the policy via a modified PPO objective, using the critics as the terminal reward. OGPO achieves state-of-the-art performance on manipulation tasks spanning multi-task settings, high-precision insertion, and dexterous control. To our knowledge, it is also the only method that can fine-tune poorly initialized behavior cloning policies to near-full task success with no expert data in the online replay buffer, and it does so with little task-specific hyperparameter tuning. Through extensive empirical investigations, we demonstrate that OGPO drastically outperforms alternative approaches based on policy steering and learning residual corrections, and we identify the key mechanisms behind its performance. We further introduce practical stabilizers, including success-buffer regularization, conservative advantages, χ² regularization, and Q-variance reduction, to mitigate critic over-exploitation across state- and pixel-based settings. Beyond proposing OGPO, we conduct a systematic empirical study of GCP finetuning, identifying the stabilizing mechanisms and failure modes that govern successful off-policy full-policy improvement.
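
The stabilizers named above (success-buffer regularization, conservative advantages, χ² regularization, Q-variance reduction) are not spelled out in the abstract, so the sketch below gives only one plausible reading of two of them: a pessimistic minimum over a small critic ensemble for the advantage, and a behavior-cloning-style term on actions replayed from a buffer of successful rollouts. Function names and formulations are assumptions for illustration and may differ from the paper's definitions.

```python
import torch

def conservative_advantage(q_ensemble, v_baseline):
    """One possible reading of 'conservative advantages': a pessimistic
    minimum over an ensemble of Q estimates, minus a value baseline."""
    q_min = torch.stack(q_ensemble, dim=0).min(dim=0).values
    return (q_min - v_baseline).detach()

def success_buffer_regularizer(success_log_probs):
    """Behavior-cloning-style term: keep the policy likely under actions
    replayed from a buffer of previously successful rollouts."""
    return -success_log_probs.mean()

def regularized_policy_loss(ppo_loss, success_log_probs, reg_weight=0.1):
    # Combine the clipped-surrogate loss with the success-buffer term.
    return ppo_loss + reg_weight * success_buffer_regularizer(success_log_probs)
```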