Demystifying Group Relative Policy Optimization: Its Policy Gradient is a U-Statistic

arXiv stat.ML · March 24, 2026


Key Points

  • The paper develops a unified theoretical framework showing that Group Relative Policy Optimization (GRPO) policy gradients can be expressed as a U-statistic, clarifying why GRPO works in practice.
  • It derives statistical properties for GRPO, including a mean-squared-error characterization, finite-sample error bounds, and the asymptotic distribution of the suboptimality gap for the learned policy.
  • The authors prove GRPO is asymptotically equivalent to an “oracle” policy-gradient method that has access to a value function measuring policy quality at each training iteration, implying near-optimal long-run performance.
  • A universal scaling law is established to guide selection of the optimal group size, and experiments validate both the universality of the optimal group size and the oracle-like behavior.
  • Overall, the work links GRPO’s widely used empirical performance (notably in DeepSeekMath/DeepSeek-R1) to classical statistics, enabling more principled tuning and theoretical guarantees.
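To make the object of study concrete: GRPO replaces a learned value-function baseline with a group-relative one, normalizing each sampled completion's reward by the mean and standard deviation of its group. The sketch below shows this normalization step only (a minimal illustration assuming the standard group-mean/std form from DeepSeekMath; function name and the epsilon guard are our own, not from the paper):

```python
import statistics

def group_relative_advantages(rewards, eps=1e-8):
    """Group-relative advantage: A_i = (r_i - mean) / (std + eps).

    Minimal sketch of the normalization GRPO applies within each
    sampled group of completions; the paper's analysis treats the
    policy gradient built from these terms as a U-statistic.
    """
    mu = statistics.mean(rewards)
    sigma = statistics.pstdev(rewards)  # population std over the group
    return [(r - mu) / (sigma + eps) for r in rewards]
```

Because every reward in the group enters both the baseline and each advantage, the resulting gradient estimator averages a symmetric function over subsets of samples, which is exactly what makes the U-statistic machinery applicable.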

Abstract

Group relative policy optimization (GRPO), a core methodological component of DeepSeekMath and DeepSeek-R1, has emerged as a cornerstone for scaling the reasoning capabilities of large language models. Despite its widespread adoption and the proliferation of follow-up works, the theoretical properties of GRPO remain understudied. This paper provides a unified framework for understanding GRPO through the lens of classical U-statistics. We demonstrate that the GRPO policy gradient is inherently a U-statistic, which allows us to characterize its mean squared error (MSE) and to derive a finite-sample error bound and the asymptotic distribution of the suboptimality gap of its learned policy. Our findings reveal that GRPO is asymptotically equivalent to an oracle policy gradient algorithm -- one with access to a value function that quantifies the quality of the current policy at each training iteration -- and achieves asymptotically optimal performance within a broad class of policy gradient algorithms. Furthermore, we establish a universal scaling law that offers principled guidance for selecting the optimal group size. Empirical experiments further validate our theoretical findings, demonstrating that the optimal group size is universal and verifying the oracle property of GRPO.
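For readers unfamiliar with the statistical object invoked here, the classical definition is worth stating (this is the standard textbook form, not a formula taken from the paper): given i.i.d. samples $X_1, \dots, X_n$ and a symmetric kernel $h$ of order $m$, the U-statistic is

$$
U_n \;=\; \binom{n}{m}^{-1} \sum_{1 \le i_1 < \cdots < i_m \le n} h\!\left(X_{i_1}, \dots, X_{i_m}\right),
$$

the unbiased estimator of $\mathbb{E}\,h(X_1,\dots,X_m)$ with minimum variance among all unbiased estimators. Casting the GRPO gradient in this form is what unlocks the MSE characterization, finite-sample bounds, and asymptotic normality results the abstract describes.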