
When Right Meets Wrong: Bilateral Context Conditioning with Reward-Confidence Correction for GRPO

arXiv cs.AI / 3/16/2026


Key Points

  • The paper reexamines Group Relative Policy Optimization (GRPO), noting that GRPO treats each output as an independent sample and misses the contrast between correct and incorrect solutions within the same group.
  • It introduces Bilateral Context Conditioning (BICC), enabling cross-reference of successful and failed reasoning traces during optimization without additional sampling or auxiliary models.
  • It adds Reward-Confidence Correction (RCC) to stabilize training by dynamically adjusting the advantage baseline using the reward-confidence covariance derived from a first-order approximation of the variance-minimizing estimator.
  • The proposed methods yield a contrastive reformulation of GRPO with empirical improvements on mathematical reasoning benchmarks across multiple models and algorithms, and the code is released on GitHub.
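As a rough sketch of the contrastive reformulation (our own notation, assuming binary 0/1 rewards and GRPO's group-standardized advantages; the paper's exact derivation may differ): for a group of $G$ sampled outputs with a fraction $p$ correct and importance ratios $\rho_i = \pi_\theta(o_i \mid q)/\pi_{\theta_{\text{old}}}(o_i \mid q)$, the unclipped GRPO objective collapses to a margin between the average ratios of correct and incorrect samples,

$$
\sum_{i=1}^{G} \hat{A}_i \, \rho_i \;=\; G\,\sqrt{p(1-p)}\,\bigl(\bar{\rho}_{\text{correct}} - \bar{\rho}_{\text{incorrect}}\bigr),
\qquad
\hat{A}_i = \frac{r_i - \operatorname{mean}(r)}{\operatorname{std}(r)},
$$

so increasing the objective pushes up the policy ratios of correct traces relative to incorrect ones within the same group.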

Abstract

Group Relative Policy Optimization (GRPO) has emerged as an effective method for training reasoning models. Although it computes advantages relative to the group mean, GRPO treats each output as an independent sample during optimization and overlooks a vital structural signal: the natural contrast between correct and incorrect solutions within the same group. It thereby ignores rich comparative information that could be exploited by explicitly pitting successful reasoning traces against failed ones. To capitalize on this, we present a contrastive reformulation of GRPO, showing that the GRPO objective implicitly maximizes the margin between the policy ratios of correct and incorrect samples. Building on this insight, we propose Bilateral Context Conditioning (BICC), a mechanism that lets the model cross-reference successful and failed reasoning traces during optimization, enabling direct information flow across samples. We further introduce Reward-Confidence Correction (RCC), which stabilizes training by dynamically adjusting the advantage baseline in GRPO using the reward-confidence covariance derived from a first-order approximation of the variance-minimizing estimator. Both mechanisms require no additional sampling or auxiliary models and can be adapted to all GRPO variants. Experiments on mathematical reasoning benchmarks demonstrate consistent improvements across a range of models and algorithms. Code is available at https://github.com/Skylanding/BiCC.
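To make the advantage computation concrete, below is a minimal sketch in Python/NumPy. The covariance-based baseline shift stands in for the Reward-Confidence Correction described above; the function name, the `lam` scale, and the use of mean token log-probability as the confidence signal are our assumptions, not the paper's exact formulation.

```python
import numpy as np

def group_advantages_with_confidence_baseline(rewards, confidences, lam=1.0, eps=1e-8):
    """Group-relative advantages with a confidence-corrected baseline.

    rewards:     per-sample scalar rewards for one prompt's group of outputs.
    confidences: per-sample confidence proxy (e.g., mean token log-probability).
    The covariance term below is an illustrative stand-in for RCC, not the
    paper's exact estimator.
    """
    r = np.asarray(rewards, dtype=np.float64)
    c = np.asarray(confidences, dtype=np.float64)

    # Plain GRPO baseline: the group mean reward, standardized by the group std.
    mean_r, std_r = r.mean(), r.std()
    mean_c, var_c = c.mean(), c.var()

    # Assumed RCC-style correction: shift each sample's baseline by a term
    # proportional to the reward-confidence covariance, so reward that is
    # already "explained" by the model's own confidence contributes less advantage.
    cov_rc = np.mean((r - mean_r) * (c - mean_c))
    baseline = mean_r + lam * (cov_rc / (var_c + eps)) * (c - mean_c)

    return (r - baseline) / (std_r + eps)

# Example: four sampled solutions, two correct, with differing confidences.
adv = group_advantages_with_confidence_baseline(
    rewards=[1.0, 0.0, 1.0, 0.0],
    confidences=[-0.2, -0.9, -0.4, -1.1],
)
print(adv)
```

With `lam=0` this reduces to the standard GRPO group-standardized advantage, which makes the correction easy to ablate.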