DGPO: Distribution Guided Policy Optimization for Fine-Grained Credit Assignment

arXiv cs.LG · May 6, 2026

📰 News · Models & Research

Key Points

  • The paper proposes DGPO, a critic-free reinforcement learning framework aimed at improving how large language models learn complex reasoning tasks.
  • It targets a key weakness of prior methods such as Group Relative Policy Optimization (GRPO): coarse, sequence-level credit assignment, which makes it hard to pinpoint which reasoning steps matter in long chain-of-thought traces (a minimal sketch of this sequence-level scheme follows this list).
  • DGPO addresses training instability by rethinking the standard unbounded KL-divergence penalty: distribution deviation is used as a guidance signal rather than a strict penalty (a sketch of the conventional KL term appears after the abstract).
  • By reducing gradient instability and mode-seeking conservatism, the approach aims to enable more reliable exploration of new reasoning trajectories.
  • The work is presented as a new arXiv submission (arXiv:2605.03327v1), inviting further evaluation and validation of the method’s effectiveness.
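
To make the credit-assignment weakness concrete, here is a minimal sketch of the group-relative advantage computation used in GRPO-style training, written in PyTorch; the function name and the toy rewards and lengths are illustrative, not taken from the paper. Note how every token of a completion inherits the same scalar advantage, so the update cannot distinguish pivotal reasoning steps from filler.

```python
import torch

def grpo_sequence_advantages(rewards: torch.Tensor, seq_lens: torch.Tensor) -> torch.Tensor:
    """Group-relative advantages, GRPO-style: one scalar per sampled
    completion, broadcast uniformly to every token of that completion."""
    # Normalize rewards within the group of G samples for one prompt.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Coarse, sequence-level credit assignment: each token in completion i
    # receives the identical advantage adv[i].
    return torch.cat([a.expand(int(n)) for a, n in zip(adv, seq_lens)])

# Illustrative example: four sampled chains of thought for one prompt.
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])   # sequence-level rewards
lengths = torch.tensor([120, 95, 210, 150])    # tokens per completion
token_adv = grpo_sequence_advantages(rewards, lengths)  # shape: (575,)
```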

Abstract

Reinforcement learning is crucial for aligning large language models to perform complex reasoning tasks. However, current algorithms such as Group Relative Policy Optimization suffer from coarse-grained, sequence-level credit assignment, which severely struggles to isolate pivotal reasoning steps within long Chain-of-Thought generations. Furthermore, the standard unbounded Kullback-Leibler divergence penalty induces severe gradient instability and mode-seeking conservatism, ultimately stifling the discovery of novel reasoning trajectories. To overcome these limitations, we introduce Distribution Guided Policy Optimization, a novel critic-free reinforcement learning framework that reinterprets distribution deviation as a guiding signal rather than a rigid penalty.
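
For context on the penalty the abstract criticizes, the sketch below shows the per-token KL estimator commonly used in GRPO-style objectives (the unbiased "k3" estimator); it is nonnegative but unbounded above, so tokens where the policy has drifted far from the reference dominate the loss and its gradients. This illustrates the baseline penalty only, not DGPO's guidance mechanism, and the function name and example values are made up.

```python
import torch

def kl_penalty_per_token(logp: torch.Tensor, ref_logp: torch.Tensor) -> torch.Tensor:
    """Unbiased per-token KL estimate exp(q - p) - (q - p) - 1 with
    q = ref_logp, p = logp. Nonnegative, but unbounded above."""
    log_ratio = ref_logp - logp
    return torch.exp(log_ratio) - log_ratio - 1.0

# A single token whose log-probability has drifted far from the reference
# yields an enormous penalty (and gradient), destabilizing the update.
logp     = torch.tensor([-0.5, -6.0, -1.0])
ref_logp = torch.tensor([-0.6, -0.7, -1.5])
print(kl_penalty_per_token(logp, ref_logp))  # ≈ [0.005, 194.0, 0.107]
```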