AI Navigate

Discounted Beta–Bernoulli Reward Estimation for Sample-Efficient Reinforcement Learning with Verifiable Rewards

arXiv cs.LG / 3/20/2026


Key Points

  • The paper introduces Discounted Beta–Bernoulli (DBB) reward estimation for Reinforcement Learning with Verifiable Rewards (RLVR), modeling rewards as samples from a policy-induced distribution and casting advantage computation as a distribution estimation problem.
  • DBB uses historical reward statistics to handle non-stationary distributions, trading unbiasedness for reduced and more stable variance to avoid variance collapse and achieve lower mean squared error than standard point estimation.
  • Empirical results on six in-distribution and three out-of-distribution benchmarks show that GRPO with DBB outperforms naive GRPO, with average Acc@8 improvements of 3.22/2.42 points in-distribution and 12.49/6.92 points out-of-distribution for the 1.7B and 8B models, respectively, without extra compute or memory.
  • The approach targets sample inefficiency in group-based RLVR and promises improved reasoning capabilities for large language models through more reliable reward estimation.

Abstract

Reinforcement learning with verifiable rewards (RLVR) has emerged as an effective post-training paradigm for improving the reasoning capabilities of large language models. However, existing group-based RLVR methods often suffer from severe sample inefficiency. This inefficiency stems from reliance on point estimation of rewards from a small number of rollouts, leading to high estimation variance, variance collapse, and ineffective utilization of generated responses. In this work, we reformulate RLVR from a statistical estimation perspective by modeling rewards as samples drawn from a policy-induced distribution and casting advantage computation as the problem of estimating the reward distribution from finite data. Building on this view, we propose Discounted Beta–Bernoulli (DBB) reward estimation, which leverages historical reward statistics for the non-stationary distribution. Although biased, the resulting estimator exhibits reduced and stable variance, theoretically avoids estimated variance collapse, and achieves lower mean squared error than standard point estimation. Extensive experiments across six in-distribution and three out-of-distribution reasoning benchmarks demonstrate that GRPO with DBB consistently outperforms naive GRPO, achieving average Acc@8 improvements of 3.22/2.42 points in-distribution and 12.49/6.92 points out-of-distribution on the 1.7B and 8B models, respectively, without additional computational cost or memory usage.
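To make the idea concrete, here is a minimal sketch of a discounted Beta–Bernoulli estimator of the kind the abstract describes. The paper's exact update rule and hyperparameters are not given in this summary, so the class name, the discount value, and the Beta(1, 1) prior below are illustrative assumptions; the sketch only shows why a posterior with discounted pseudo-counts keeps a strictly positive standard deviation even when a fresh group of verifiable (0/1) rewards is all-identical, where a naive empirical group std would collapse to zero.

```python
import math

class DiscountedBetaBernoulli:
    """Illustrative discounted Beta-Bernoulli reward estimator (not the
    paper's exact implementation). Binary rewards are treated as Bernoulli
    samples; older rollouts contribute through discounted pseudo-counts."""

    def __init__(self, alpha0=1.0, beta0=1.0, discount=0.9):
        self.alpha = alpha0        # pseudo-count of successes (reward = 1)
        self.beta = beta0          # pseudo-count of failures (reward = 0)
        self.discount = discount   # down-weights historical reward statistics

    def update(self, rewards):
        # Decay old statistics, then add the new group's 0/1 rewards.
        self.alpha = self.discount * self.alpha + sum(rewards)
        self.beta = self.discount * self.beta + sum(1 - r for r in rewards)

    def mean(self):
        return self.alpha / (self.alpha + self.beta)

    def std(self):
        # Standard deviation of a Beta(alpha, beta) distribution; positive
        # whenever both pseudo-counts are positive, so it cannot collapse.
        a, b = self.alpha, self.beta
        return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

    def advantages(self, rewards):
        # GRPO-style normalized advantages, but using the posterior's mean
        # and std instead of the group's empirical point estimates.
        m, s = self.mean(), self.std()
        return [(r - m) / s for r in rewards]

# A group of identical rewards has empirical std 0 (naive normalization
# would divide by zero), but the Beta posterior std stays positive.
est = DiscountedBetaBernoulli()
est.update([1, 1, 1, 1])
print(est.std() > 0)  # True
```

The estimator is deliberately biased toward its prior and its discounted history, which is exactly the trade the summary describes: some bias in exchange for lower, more stable variance of the advantage estimates.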