Positive-Only Drifting Policy Optimization

arXiv cs.LG / April 21, 2026


Key Points

  • The paper proposes Positive-Only Drifting Policy Optimization (PODPO) for online reinforcement learning, aiming to avoid the limitations of common Gaussian or flow-based policies and the need for training tricks such as heavy gradient clipping or trust regions.
  • PODPO is likelihood-free and gradient-clipping-free, using a generative “drifting model” to update policies through advantage-weighted local contrastive drifting.
  • Instead of correcting mistakes through post-hoc penalization of negative samples, PODPO learns from positive-advantage samples alone, steering behavior toward high-return regions.
  • The method also leverages the local smoothness of the generative model to proactively prevent erroneous actions, positioning PODPO as a new direction for generative policy learning in online RL.

Abstract

In the field of online reinforcement learning (RL), traditional Gaussian policies and flow-based methods are often constrained by their unimodal expressiveness, complex gradient clipping, or stringent trust-region requirements. Moreover, they all rely on post-hoc penalization of negative samples to correct erroneous actions. This paper introduces Positive-Only Drifting Policy Optimization (PODPO), a likelihood-free and gradient-clipping-free generative approach for online RL. By leveraging the drifting model, PODPO performs policy updates via advantage-weighted local contrastive drifting. Relying solely on positive-advantage samples, it elegantly steers actions toward high-return regions while exploiting the inherent local smoothness of the generative model to enable proactive error prevention. In doing so, PODPO opens a promising new pathway for generative policy learning in online settings.
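The key points and abstract describe the update at a high level: keep only the samples whose advantage is positive, and pull a generative model's proposed actions toward those samples, weighted by advantage, with no likelihood computation and no gradient clipping. The minimal PyTorch sketch below illustrates only that structure; `drift_model`, the squared-error pull, and the weight normalization are illustrative assumptions, not the paper's actual local contrastive-drifting objective.

```python
# Hypothetical sketch of a positive-only, advantage-weighted policy update
# in the spirit of PODPO. The paper's drifting model and contrastive-drift
# loss are not specified here; this substitutes a simple weighted regression.

import torch

def podpo_style_update(drift_model, optimizer, states, actions, advantages):
    """One gradient step using only positive-advantage samples.

    drift_model(states) is assumed to be a generative module proposing
    actions; the loss pulls proposals toward actions whose advantage is
    positive, weighted by that advantage. No action likelihoods and no
    gradient clipping are involved, matching the paper's stated claims.
    """
    pos = advantages > 0                      # keep only positive-advantage samples
    if not pos.any():
        return None                           # nothing to learn from this batch

    s, a, w = states[pos], actions[pos], advantages[pos]
    w = w / (w.sum() + 1e-8)                  # normalize weights (an assumption)

    proposed = drift_model(s)                 # generative proposal (an assumption)
    # Advantage-weighted "drift" toward high-return actions; a weighted
    # squared error stands in for the paper's contrastive-drifting objective.
    loss = (w * ((proposed - a) ** 2).sum(dim=-1)).sum()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                          # no gradient clipping, per the paper
    return loss.item()
```

In the paper itself, the local contrastive-drifting objective presumably replaces the plain weighted regression used here; the sketch is meant only to make the positive-only, advantage-weighted shape of the update concrete.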
