Task-Specified Compliance Bounds for Humanoids via Lipschitz-Constrained Policies

arXiv cs.RO / 3/23/2026


Key Points

  • The paper introduces the anisotropic Lipschitz-constrained policy (ALCP) for reinforcement-learning-based humanoid control, linking a task-space stiffness upper bound to a state-dependent Lipschitz-style constraint on the policy Jacobian.
  • The constraint is enforced during RL training with a hinge-squared spectral-norm penalty, enabling direction-dependent compliance while preserving physical interpretability.
  • It addresses limitations of prior Lipschitz-constrained policies that used a single scalar budget and lacked direct ties to physically meaningful compliance specifications.
  • Experiments on humanoid robots demonstrate that ALCP improves locomotion stability and impact robustness, while reducing oscillations and energy usage.

Abstract

Reinforcement learning (RL) has demonstrated substantial potential for humanoid bipedal locomotion and the control of complex motions. To cope with oscillations and impacts induced by environmental interactions, compliant control is widely regarded as an effective remedy. However, the model-free nature of RL makes it difficult to impose task-specified and quantitatively verifiable compliance objectives, and classical model-based stiffness designs are not directly applicable. Lipschitz-Constrained Policies (LCP), which regularize the local sensitivity of a policy via gradient penalties, have recently been used to smooth humanoid motions. Nevertheless, existing LCP-based methods typically employ a single scalar Lipschitz budget and lack an explicit connection to physically meaningful compliance specifications in real-world systems. In this study, we propose an anisotropic Lipschitz-constrained policy (ALCP) that maps a task-space stiffness upper bound to a state-dependent Lipschitz-style constraint on the policy Jacobian. The resulting constraint is enforced during RL training via a hinge-squared spectral-norm penalty, preserving physical interpretability while enabling direction-dependent compliance. Experiments on humanoid robots show that ALCP improves locomotion stability and impact robustness, while reducing oscillations and energy usage.
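The core training-time mechanism described above, a hinge-squared penalty on the spectral norm of the policy Jacobian against a state-dependent budget, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the single-observation setting, and the scalar `lip_bound` argument are assumptions, and the paper's mapping from a task-space stiffness upper bound to that budget is not reproduced here.

```python
import torch

def hinge_sq_spectral_penalty(policy, obs, lip_bound):
    """Hinge-squared spectral-norm penalty on the policy Jacobian.

    `lip_bound` stands in for the state-dependent Lipschitz budget that,
    per the paper, is derived from a task-space stiffness upper bound.
    """
    # Jacobian of the action w.r.t. the observation at this state.
    J = torch.autograd.functional.jacobian(policy, obs)
    # Spectral norm = largest singular value = local Lipschitz constant.
    sigma_max = torch.linalg.matrix_norm(J, ord=2)
    # Hinge-squared: zero penalty while sensitivity stays within budget,
    # quadratic growth once the budget is exceeded.
    return torch.clamp(sigma_max - lip_bound, min=0.0) ** 2

# Toy usage with a known linear "policy": Jacobian is W, spectral norm 2.
W = torch.tensor([[2.0, 0.0], [0.0, 1.0]])
policy = lambda x: W @ x
obs = torch.zeros(2)
penalty = hinge_sq_spectral_penalty(policy, obs, torch.tensor(1.5))
# (2.0 - 1.5)^2 = 0.25; with a budget above 2.0 the penalty is exactly 0.
```

In training, a term like this would be added to the RL loss with a weighting coefficient; making `lip_bound` direction-dependent (anisotropic) would require penalizing per-direction gains rather than the single largest singular value.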