ContractionPPO: Certified Reinforcement Learning via Differentiable Contraction Layers

arXiv cs.RO / 3/23/2026


Key Points

  • ContractionPPO adds a state-dependent contraction metric layer to Proximal Policy Optimization (PPO), enabling certified robust planning and control for legged robots.
  • The contraction metric is parameterized as a Lipschitz neural network and trained jointly with the policy, either in parallel or as an auxiliary head.
  • Although the contraction metric is not deployed during real-world execution, the approach derives upper bounds on the worst-case contraction rate to ensure simulation-to-real-world generalization.
  • Hardware experiments on quadruped locomotion demonstrate robust, certifiably stable control under strong external perturbations.
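The metric head described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: it assumes spectral normalization as the Lipschitz parameterization (the paper does not specify which one) and builds the metric as `M(x) = L(x) L(x)ᵀ + εI` from a lower-triangular factor so that symmetry and positive definiteness hold by construction. All names (`MetricHead`, `spectral_normalize`) are invented for this sketch.

```python
import numpy as np

def spectral_normalize(W, n_iters=30):
    """Scale W so its largest singular value is at most 1 (power iteration)."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # estimate of the top singular value
    return W / max(sigma, 1.0)

class MetricHead:
    """Hypothetical state-dependent contraction-metric head.

    Each linear layer is spectral-normalized and tanh is 1-Lipschitz,
    so the map from state to factor entries has a known Lipschitz bound.
    The metric M(x) = L(x) L(x)^T + eps*I is symmetric positive definite
    for every state x by construction.
    """
    def __init__(self, state_dim, hidden=32, eps=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.n, self.eps = state_dim, eps
        n_tril = state_dim * (state_dim + 1) // 2  # lower-triangular entries
        self.W1 = spectral_normalize(rng.standard_normal((hidden, state_dim)))
        self.W2 = spectral_normalize(rng.standard_normal((n_tril, hidden)))

    def metric(self, x):
        h = np.tanh(self.W1 @ x)
        l = self.W2 @ h
        L = np.zeros((self.n, self.n))
        L[np.tril_indices(self.n)] = l
        return L @ L.T + self.eps * np.eye(self.n)
```

In a joint-training setup this head would share the PPO backbone's features and be trained with an auxiliary loss penalizing violations of the contraction condition, while the spectral normalization keeps the worst-case Lipschitz constant of the metric bounded across training.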

Abstract

Legged locomotion in unstructured environments demands not only high-performance control policies but also formal guarantees to ensure robustness under perturbations. Control methods often require carefully designed reference trajectories, which are challenging to construct in high-dimensional, contact-rich systems such as quadruped robots. In contrast, Reinforcement Learning (RL) directly learns policies that implicitly generate motion, and uniquely benefits from access to privileged information, such as full state and dynamics during training, that is not available at deployment. We present ContractionPPO, a framework for certified robust planning and control of legged robots by augmenting Proximal Policy Optimization (PPO) RL with a state-dependent contraction metric layer. This approach enables the policy to maximize performance while simultaneously producing a contraction metric that certifies incremental exponential stability of the simulated closed-loop system. The metric is parameterized as a Lipschitz neural network and trained jointly with the policy, either in parallel or as an auxiliary head of the PPO backbone. While the contraction metric is not deployed during real-world execution, we derive upper bounds on the worst-case contraction rate and show that these bounds ensure the learned contraction metric generalizes from simulation to real-world deployment. Our hardware experiments on quadruped locomotion demonstrate that ContractionPPO enables robust, certifiably stable control even under strong external perturbations.
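The certificate the abstract refers to is, in standard contraction theory (Lohmiller and Slotine), a matrix inequality on the metric; the paper's exact formulation may differ, but the usual condition reads as follows. For closed-loop dynamics $\dot{x} = f(x)$ with Jacobian $A(x) = \partial f / \partial x$, a uniformly positive definite metric $M(x)$ certifies incremental exponential stability at rate $\lambda > 0$ if

```latex
\dot{M}(x) + A(x)^\top M(x) + M(x) A(x) \preceq -2\lambda M(x),
```

in which case the differential displacement satisfies $\frac{d}{dt}\left(\delta x^\top M \,\delta x\right) \le -2\lambda\, \delta x^\top M \,\delta x$, so any two trajectories converge toward each other exponentially. The "worst-case contraction rate" bounds mentioned above would then lower-bound the $\lambda$ achievable under model mismatch between simulation and hardware.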