AI Navigate

Entropy-Preserving Reinforcement Learning

arXiv cs.LG / 3/13/2026


Key Points

  • The paper argues that many policy gradient reinforcement learning methods naturally reduce the entropy of explored trajectories during training, which can limit exploration and diversity.
  • It formally analyzes how leading policy gradient objectives affect entropy dynamics and identifies empirical factors, such as numerical precision, that significantly impact entropy behavior.
  • The authors propose explicit entropy-control mechanisms, including REPO, which modifies the advantage function to regulate entropy, and ADAPO, an adaptive asymmetric clipping approach.
  • Models trained with these entropy-preserving methods maintain diversity throughout training and yield final policies that are more performant and adaptable to new environments.
  • The work emphasizes actively monitoring and controlling entropy as a critical aspect of RL training rather than letting it drift uncontrolled.
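The paper's exact REPO formulation is not given in this summary, but the general idea — shaping the advantage so that gradient updates also ascend an entropy objective — can be sketched. This relies on the identity that adding the centered surprisal, beta * (-log pi(a) - H(pi)), to the advantage yields an unbiased entropy-gradient term in the policy gradient. The function names, `beta`, and the target-entropy gating below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of a categorical distribution (in nats)."""
    p = np.clip(probs, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def entropy_regulated_advantage(advantages, probs, target_entropy, beta=0.1):
    """Shift advantages by an entropy-gradient term so updates push the
    policy's entropy toward a target. Illustrative sketch only: `beta`
    and the gap-gating are assumptions, not the paper's REPO."""
    h = entropy(probs)
    gap = target_entropy - h            # > 0 when entropy is too low
    logp = np.log(np.clip(probs, 1e-12, 1.0))
    # -log p - H is the per-action entropy-gradient weight; it is
    # positive for rarer-than-average actions, so a positive gap
    # up-weights them and raises entropy.
    surprisal_centered = -logp - h
    return advantages + beta * gap * surprisal_centered
```

With zero task advantages and entropy below target, the adjusted advantage is larger for rare actions than for likely ones, so the update spreads probability mass out rather than collapsing it.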

Abstract

Policy gradient algorithms have driven many recent advancements in language model reasoning. An appealing property is their ability to learn from exploration on their own trajectories, a process crucial for fostering diverse and creative solutions. As we show in this paper, many policy gradient algorithms naturally reduce the entropy -- and thus the diversity of explored trajectories -- as part of training, yielding a policy increasingly limited in its ability to explore. In this paper, we argue that entropy should be actively monitored and controlled throughout training. We formally analyze the contributions of leading policy gradient objectives on entropy dynamics, identify empirical factors (such as numerical precision) that significantly impact entropy behavior, and propose explicit mechanisms for entropy control. These include REPO, a family of algorithms that modify the advantage function to regulate entropy, and ADAPO, an adaptive asymmetric clipping approach. Models trained with our entropy-preserving methods maintain diversity throughout training, yielding final policies that are more performant and retain their trainability for sequential learning in new environments.
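The abstract describes ADAPO as an adaptive asymmetric clipping approach but does not spell out its rule. As a hedged sketch, asymmetric clipping means using independent lower and upper bounds in a PPO-style surrogate, and an adaptive variant could widen the upper bound when entropy drops below target (allowing up-weighting of rare tokens). The adaptation rule and parameter names below are assumptions for illustration, not ADAPO itself:

```python
import numpy as np

def asymmetric_clip_objective(ratio, advantage, eps_low=0.2, eps_high=0.2):
    """PPO-style clipped surrogate with independent lower/upper bounds
    on the importance ratio."""
    clipped = np.clip(ratio, 1.0 - eps_low, 1.0 + eps_high)
    return np.minimum(ratio * advantage, clipped * advantage)

def adapt_clip_bounds(current_entropy, target_entropy,
                      base_eps=0.2, scale=0.5):
    """Widen the upper clip bound when entropy falls below target, so
    probability mass can flow back to rarer tokens; widen the lower
    bound when entropy overshoots. (This rule is an assumption.)"""
    gap = float(np.clip(target_entropy - current_entropy, -1.0, 1.0))
    eps_high = base_eps * (1.0 + scale * max(gap, 0.0))
    eps_low = base_eps * (1.0 + scale * max(-gap, 0.0))
    return eps_low, eps_high
```

For example, with entropy 0.5 against a target of 1.0, the sketch returns a widened upper bound (0.25) and the unchanged lower bound (0.2), letting positive-advantage rare tokens gain more probability per update.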