AI Navigate

Thermodynamics of Reinforcement Learning Curricula

arXiv cs.AI / 3/16/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper links non-equilibrium thermodynamics to curriculum learning in reinforcement learning by modeling reward parameters as coordinates on a task manifold.
  • It shows that curricula minimizing excess thermodynamic work are geodesics in this task space, giving curriculum design a geometric interpretation.
  • It introduces MEW (Minimum Excess Work), an algorithm that derives a principled temperature-annealing schedule for maximum-entropy RL.
  • It connects physics-inspired theory to practical RL training strategies, with potential implications for optimization and generalization.

Abstract

Connections between statistical mechanics and machine learning have repeatedly proven fruitful, providing insight into optimization, generalization, and representation learning. In this work, we follow this tradition by leveraging results from non-equilibrium thermodynamics to formalize curriculum learning in reinforcement learning (RL). In particular, we propose a geometric framework for RL by interpreting reward parameters as coordinates on a task manifold. We show that, by minimizing the excess thermodynamic work, optimal curricula correspond to geodesics in this task space. As an application of this framework, we provide an algorithm, "MEW" (Minimum Excess Work), to derive a principled schedule for temperature annealing in maximum-entropy RL.
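The paper itself does not spell out the MEW algorithm here, but the geodesic idea has a standard one-dimensional reading: if temperature is the only control parameter and the task space carries a thermodynamic metric g(T), then the minimum-excess-work path traverses thermodynamic length ∫√g(T) dT at constant speed. The sketch below is a hypothetical illustration under that assumption; the function name `mew_schedule`, the toy metric g(T) = 1/T², and all numerical choices are this example's, not the paper's.

```python
import numpy as np

def mew_schedule(g, T_start, T_end, n_steps, n_grid=10_000):
    """Sketch of a constant-thermodynamic-speed annealing schedule.

    For a 1D metric g(T), the geodesic (minimum excess work) protocol
    spaces the temperatures so that each step covers an equal increment
    of thermodynamic length L(T) = integral of sqrt(g(T)) dT.
    """
    Ts = np.linspace(T_start, T_end, n_grid)
    speed = np.sqrt(g(Ts))  # local metric speed |dL/dT|
    # Trapezoidal cumulative thermodynamic length along the path
    dL = 0.5 * (speed[1:] + speed[:-1]) * np.abs(np.diff(Ts))
    length = np.concatenate(([0.0], np.cumsum(dL)))
    # Invert L(T): pick temperatures at equally spaced length targets
    targets = np.linspace(0.0, length[-1], n_steps)
    return np.interp(targets, length, Ts)

# Toy metric g(T) = 1/T^2: the geodesic schedule becomes geometric in T
schedule = mew_schedule(lambda T: 1.0 / T**2, T_start=1.0, T_end=0.05, n_steps=8)
```

With this particular metric the constant-speed condition reduces to a geometric (exponential) cooling schedule, which is why geometric annealing is often a reasonable default; a different metric would bend the schedule accordingly.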