Addressing Performance Saturation for LLM RL via Precise Entropy Curve Control

arXiv cs.LG / April 30, 2026

Key Points

  • The paper argues that reinforcement learning for LLMs often hits performance saturation because entropy collapses, limiting exploration as training scales.
  • It introduces Entrocraft, a rejection-sampling method that realizes a precise, user-customized entropy schedule without adding objective regularization and without depending on a particular advantage estimator (a minimal sketch of the idea follows this list).
  • The authors provide theory linking per-step entropy changes to the advantage distribution, offering an explanation for why prior entropy-preserving or anti-collapse techniques can become unstable over long training.
  • Experiments show that Entrocraft mitigates performance saturation, improving generalization, output diversity, and long-horizon training: a 4B model outperforms an 8B baseline, training keeps improving for up to 4x longer before plateauing, and pass@K rises by 50% over the baseline.
  • A systematic entropy-schedule study finds that linear annealing (high initial entropy decaying to a slightly lower target) works best among the schedules tested.
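
The page does not spell out Entrocraft's acceptance rule, so the following is only a minimal Python sketch of the general idea under stated assumptions: a linear annealing target (as in the last bullet) plus a rejection step that drops the rollouts whose advantage/log-probability correlation pushes entropy the wrong way. The function names (`linear_entropy_target`, `entropy_rejection_filter`), the constants (`h_start`, `h_end`, `reject_frac`), the covariance-style pressure proxy, and the deterministic top-k rejection rule are all illustrative, not the paper's.

```python
import numpy as np

def linear_entropy_target(step: int, total_steps: int,
                          h_start: float = 0.9, h_end: float = 0.7) -> float:
    """Linear annealing: start at a high entropy target and decay to a
    slightly lower one (the schedule shape the study found to work best).
    The endpoint values here are made-up placeholders."""
    frac = min(step / total_steps, 1.0)
    return h_start + frac * (h_end - h_start)

def entropy_rejection_filter(advantages: np.ndarray, logprobs: np.ndarray,
                             current_entropy: float, target_entropy: float,
                             reject_frac: float = 0.25) -> np.ndarray:
    """Return a boolean keep-mask over rollouts, dropping those whose
    contribution pushes entropy away from the scheduled target.

    Assumes the first-order picture that entropy falls when advantage and
    log-probability are positively correlated: each sample's
    advantage * centered-logprob product is then a proxy for how strongly
    it drags entropy down (positive) or up (negative).
    """
    centered_lp = logprobs - logprobs.mean()
    entropy_pressure = advantages * centered_lp  # > 0: tends to reduce entropy

    keep = np.ones(len(advantages), dtype=bool)
    n_reject = int(reject_frac * len(advantages))
    if n_reject > 0:
        order = np.argsort(entropy_pressure)
        if current_entropy < target_entropy:
            keep[order[-n_reject:]] = False  # entropy too low: drop top reducers
        else:
            keep[order[:n_reject]] = False   # entropy high enough: drop top raisers
    return keep

# Toy usage: filter a batch of 8 rollouts at step 100 of 1000.
rng = np.random.default_rng(0)
adv = rng.normal(size=8)
lp = rng.normal(loc=-2.0, size=8)
mask = entropy_rejection_filter(adv, lp, current_entropy=0.6,
                                target_entropy=linear_entropy_target(100, 1000))
print(mask)  # surviving rollouts feed the usual policy-gradient update
```

Note how this matches the two properties the bullet highlights: nothing is added to the training objective (no entropy bonus in a loss), and the filter only decides which samples reach the downstream advantage-weighted update, so it is agnostic to how those advantages were estimated.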

Abstract

Reinforcement learning (RL) has unlocked complex reasoning abilities in large language models (LLMs). However, most RL algorithms suffer from performance saturation, preventing further gains as RL training scales. This problem can be characterized by the collapse of entropy, a key diagnostic for exploration in RL. Existing attempts have tried to prevent entropy collapse through regularization or clipping, but their resulting entropy curves often exhibit instability in the long term, which hinders performance gains. In this paper, we introduce Entrocraft, a simple rejection-sampling approach that realizes any user-customized entropy schedule by biasing the advantage distributions. Entrocraft requires no objective regularization and is advantage-estimator-agnostic. Theoretically, we relate per-step entropy change to the advantage distribution under minimal assumptions, which explains the behavior of existing RL and entropy-preserving methods. Entrocraft also enables a systematic study of entropy schedules, where we find that linear annealing, which starts high and decays to a slightly lower target, performs best. Empirically, Entrocraft addresses performance saturation, significantly improving generalization, output diversity, and long-term training. It enables a 4B model to outperform an 8B baseline, sustains improvement for up to 4x longer before plateauing, and raises pass@K by 50% over the baseline.
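
The abstract does not reproduce the theoretical statement itself. For orientation only, prior analyses of entropy dynamics in policy-gradient RL commonly arrive at a first-order relation of roughly the following shape, with step size \eta and advantage A; the paper's "minimal assumptions" version may differ in form and conditions:

```latex
\[
\Delta \mathcal{H}(\pi_\theta)
  \;\approx\; -\,\eta\,
  \operatorname{Cov}_{a \sim \pi_\theta(\cdot \mid s)}
  \bigl(\log \pi_\theta(a \mid s),\, A(s,a)\bigr)
\]
```

Read this way, entropy collapses whenever high-advantage actions are already high-probability, and any mechanism that reshapes the advantage distribution, as rejection sampling does, gains direct leverage over the entropy curve. That is the link between the theory and the schedule-control method described above.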