Stability of Control Lyapunov Function Guided Reinforcement Learning

arXiv cs.RO / 5/5/2026


Key Points

  • The paper addresses a key gap in reinforcement learning for humanoid locomotion by providing stability analysis for control policies derived via control Lyapunov function guided RL (CLF-RL).
  • It models the RL task as an optimal control problem and proves exponential stability for both continuous- and discrete-time settings.
  • The stability theory covers not only the core CLF reward terms but also the extra reward terms commonly added in practical CLF-RL implementations (see the reward sketch after this list).
  • Numerical experiments validate the theoretical bounds on benchmark systems including the double integrator and cart-pole.
  • The method is also demonstrated on a walking humanoid robot, where CLF-guided rewards produce stable walking gaits corresponding to periodic orbits.
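
A minimal sketch of what such a CLF-guided reward can look like, assuming a quadratic candidate Lyapunov function V(x) = xᵀPx and a discrete-time decrease condition; the names (`P`, `alpha`, `w_extra`) and the particular reward shaping below are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

# Illustrative quadratic Lyapunov function V(x) = x^T P x.
# P and alpha are hypothetical choices, not taken from the paper.
P = np.diag([1.0, 0.5])   # positive-definite matrix defining V
alpha = 0.1               # desired per-step decay rate, 0 < alpha < 1

def V(x: np.ndarray) -> float:
    """Quadratic candidate Lyapunov function."""
    return float(x @ P @ x)

def clf_reward(x: np.ndarray, x_next: np.ndarray) -> float:
    """Core CLF term: reward satisfaction of the discrete-time
    decrease condition V(x_{k+1}) <= (1 - alpha) * V(x_k)."""
    return -(V(x_next) - (1.0 - alpha) * V(x))

def total_reward(x: np.ndarray, u: np.ndarray, x_next: np.ndarray,
                 w_extra: float = 0.01) -> float:
    """CLF term plus an extra practical term (here, a control-effort
    penalty), mirroring the 'extra reward terms' the analysis covers."""
    return clf_reward(x, x_next) - w_extra * float(u @ u)
```

Maximizing the CLF term drives the learned policy toward trajectories along which V decays geometrically, which is the mechanism the paper's stability analysis formalizes.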

Abstract

Reinforcement learning (RL) has become the de facto method for achieving locomotion on humanoid robots in practice, yet stability analysis of the corresponding control policies is lacking. Recent work has attempted to merge control-theoretic ideas with reinforcement learning through control-guided learning. A notable example is the use of a control Lyapunov function (CLF) to synthesize the reinforcement learning rewards, a technique known as CLF-RL, which has shown practical success. This paper investigates the stability properties of optimal controllers obtained via CLF-RL, with the goal of bridging experimentally observed stability with theoretical guarantees. The RL problem is viewed as an optimal control problem, and exponential stability is proven in both continuous and discrete time, using both the core CLF reward terms and the additional terms used in practice. The theoretical bounds are numerically verified on systems such as the double integrator and cart-pole. Finally, the CLF-guided rewards are implemented on a walking humanoid robot to generate stable periodic orbits.
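
For reference, a standard continuous-time exponential-CLF condition of the kind such an analysis builds on (a sketch; the constants c1, c2, c3 and the exact bound are textbook forms, not quoted from the paper):

```latex
% Exponential CLF: positive constants c1, c2, c3 such that
\begin{align}
  c_1 \|x\|^2 \le V(x) \le c_2 \|x\|^2,
  \qquad
  \inf_{u}\, \dot V(x, u) \le -c_3\, V(x).
\end{align}
% Any policy realizing the infimum gives V(x(t)) <= e^{-c3 t} V(x(0)),
% and hence the exponential stability bound
\begin{align}
  \|x(t)\| \le \sqrt{c_2 / c_1}\; e^{-\frac{c_3}{2} t}\, \|x(0)\|.
\end{align}
```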