Rethinking Token-Level Credit Assignment in RLVR: A Polarity-Entropy Analysis

arXiv cs.LG · April 14, 2026


Key Points

  • The paper examines the token-level credit assignment problem in RLVR, where sparse, outcome-based rewards make it hard to assign accurate learning signals to individual tokens.
  • It introduces the Four Quadrant Decomposition diagnostic, using reward polarity and token entropy to isolate how token updates relate to reasoning gains.
  • Through ablations and theory, the authors argue that a token’s credit capacity is upper-bounded by its entropy, and they predict reasoning improvements come primarily from high-entropy tokens with distinct behaviors for positive vs. negative updates.
  • A gradient analysis of GRPO shows that uniformly broadcast rewards weaken the learning signal at high-entropy positions while over-crediting more deterministic tokens.
  • Based on these findings, the proposed Entropy-Aware Policy Optimization (EAPO) adjusts token-level learning signals and demonstrates improved performance over strong baselines across two model families.
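The Four Quadrant Decomposition described above partitions tokens along two axes: the polarity of the broadcast outcome reward and the token's policy entropy. A minimal sketch of such a partition is below; the median-entropy threshold and the quadrant numbering are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def four_quadrant_split(entropies, reward, entropy_threshold=None):
    """Assign each token of one rollout to a quadrant by
    (reward polarity, entropy level).

    entropies: per-token policy entropies H(pi(. | context_t)).
    reward:    scalar verifiable outcome reward broadcast to all tokens.
    The median-entropy threshold is an illustrative choice.
    """
    entropies = np.asarray(entropies, dtype=float)
    if entropy_threshold is None:
        entropy_threshold = np.median(entropies)
    high = entropies >= entropy_threshold
    pos = reward > 0
    # 0: positive/high-entropy, 1: negative/high-entropy,
    # 2: positive/low-entropy,  3: negative/low-entropy
    return np.where(high,
                    np.where(pos, 0, 1),
                    np.where(pos, 2, 3))
```

With a partition like this, one can tally gradient mass per quadrant and ablate each quadrant's updates separately, which is the kind of controlled comparison the key points describe.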

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has substantially improved the reasoning ability of Large Language Models (LLMs). However, its sparse outcome-based rewards pose a fundamental credit assignment problem. We analyze this problem through the joint lens of reward polarity and token entropy. Our diagnostic tool, the Four Quadrant Decomposition, isolates token updates by polarity and entropy, and controlled ablations show that reasoning improvements concentrate in the high-entropy quadrants. To justify this observation theoretically, we adapt Conditional Mutual Information to the autoregressive RLVR setting and prove that the credit a token can carry is upper-bounded by its entropy. This view yields testable predictions that reasoning gains arise primarily from high-entropy tokens, with unique roles for positive and negative updates. A gradient analysis of GRPO further reveals how uniform reward broadcast dilutes signal at high-entropy positions while over-crediting deterministic tokens. Grounded in these insights, we propose Entropy-Aware Policy Optimization (EAPO) that modulates token-level learning signals accordingly. Extensive experiments demonstrate that EAPO outperforms strong baselines across two model families.
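The abstract's gradient analysis motivates redistributing a uniformly broadcast GRPO advantage toward high-entropy positions. A minimal sketch of one such modulation is below; the exponential weighting, the mean-one normalization, and the `temperature` parameter are illustrative assumptions, not the paper's EAPO formula.

```python
import numpy as np

def entropy_modulated_advantages(entropies, group_advantage, temperature=1.0):
    """Reweight a uniformly broadcast GRPO-style advantage per token.

    Plain GRPO applies the same group-normalized advantage to every token
    of a rollout; here tokens with higher policy entropy receive
    proportionally more of the learning signal, while the mean signal
    across the sequence is preserved.
    """
    entropies = np.asarray(entropies, dtype=float)
    weights = np.exp(entropies / temperature)
    weights = weights / weights.mean()  # mean-one: total signal unchanged
    return group_advantage * weights
```

Under this sketch, near-deterministic tokens (entropy close to zero) are down-weighted rather than over-credited, matching the diagnosis that uniform broadcast dilutes signal at high-entropy positions.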