ShapE-GRPO: Shapley-Enhanced Reward Allocation for Multi-Candidate LLM Training

arXiv cs.AI / 4/1/2026


Key Points

  • The paper introduces ShapE-GRPO, a Shapley-value–enhanced variant of Group Relative Policy Optimization designed for multi-candidate LLM training where the goal is to maximize set-level utility rather than individual-candidate quality.
  • It argues that existing GRPO-style methods give identical scalar rewards to all candidates, causing noisy gradients and allowing weaker candidates to “free-ride” on strong peers’ rewards.
  • ShapE-GRPO decomposes set-level rewards into candidate-specific signals using a cooperative-game-theory formulation, preserving Shapley value axioms while keeping computation efficient in polynomial time.
  • Experiments on multiple datasets show consistent improvements over standard GRPO, including faster convergence during training.
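The summary does not spell out the paper's exact decomposition, but the cooperative-game machinery the key points reference is standard. Below is a minimal sketch of exact Shapley credit assignment over a candidate set, computed by coalition enumeration (exponential in general, but cheap for the small groups typical of GRPO rollouts). The `best`-candidate utility and the scores are illustrative assumptions, not taken from the paper.

```python
from itertools import combinations
from math import factorial

def shapley_values(scores, utility):
    # Exact Shapley values by enumerating all coalitions -- O(2^n),
    # which is fine for small candidate groups (e.g. n <= 8).
    n = len(scores)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(n):
            for S in combinations(others, r):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = utility([scores[j] for j in S] + [scores[i]])
                without_i = utility([scores[j] for j in S])
                phi[i] += w * (with_i - without_i)
    return phi

# Illustrative set-level utility (an assumption, not the paper's): the
# set is only as good as its best candidate; 0.0 for the empty set.
best = lambda vals: max(vals, default=0.0)

scores = [0.9, 0.2, 0.1]            # made-up per-candidate quality
phi = shapley_values(scores, best)
```

By the efficiency axiom the per-candidate credits in `phi` sum exactly to the set utility (0.9 here), while the strong candidate keeps most of the credit instead of sharing one scalar equally with weaker peers.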

Abstract

In user-agent interaction scenarios such as recommendation, brainstorming, and code suggestion, Large Language Models (LLMs) often generate sets of candidate recommendations where the objective is to maximize the collective utility of the entire set rather than the quality of each candidate in isolation. However, existing reinforcement learning post-training paradigms, such as Group Relative Policy Optimization (GRPO), typically assign the same set-level scalar reward to every candidate in the set. This leads to noisy training signals in which poor candidates free-ride on the high reward produced by a single strong peer, resulting in suboptimal exploration. To address this, we propose Shapley-Enhanced GRPO (ShapE-GRPO). By leveraging the permutation-invariant nature of set-level utility, we derive a Shapley-enhanced formulation from cooperative game theory to decompose set-level rewards into granular, candidate-specific signals. We show that our formulation preserves the fundamental axioms of the Shapley value while remaining computationally efficient with polynomial-time complexity. Empirically, ShapE-GRPO consistently outperforms standard GRPO across diverse datasets with accelerated convergence during training.
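The failure mode the abstract describes can be made concrete with a GRPO-style within-group normalization (the paper's exact objective is not given in this summary, so the functions and numbers below are hypothetical). When every candidate in a group carries the same set-level scalar, the normalized advantage degenerates to zero for all of them; candidate-specific Shapley credits restore a ranking signal inside the group.

```python
def group_normalize(rewards):
    # GRPO-style within-group advantage: (r - mean) / (std + eps).
    mu = sum(rewards) / len(rewards)
    sd = (sum((r - mu) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mu) / (sd + 1e-8) for r in rewards]

# Standard GRPO on a set-level task: all four candidates inherit the
# same set-level scalar, so the within-group advantage collapses to
# zero -- weak candidates are indistinguishable from strong ones.
adv_shared = group_normalize([0.9, 0.9, 0.9, 0.9])

# Hypothetical ShapE-GRPO-style alternative: normalize candidate-specific
# Shapley credits instead (illustrative numbers, not from the paper).
adv_shapley = group_normalize([0.78, 0.08, 0.08, 0.06])
```

With shared rewards every gradient weight is identical (here, zero after normalization), which is exactly the free-riding the abstract criticizes; with decomposed credits the strong candidate gets a positive advantage and the weak ones negative.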