Cost-Matching Model Predictive Control for Efficient Reinforcement Learning in Humanoid Locomotion

arXiv cs.RO / 3/31/2026


Key Points

  • The paper introduces a cost-matching approach that integrates Model Predictive Control (MPC) with reinforcement learning for humanoid locomotion, using a parameterized MPC cost formulation based on centroidal dynamics.
  • It trains the MPC by evaluating the MPC cost-to-go along recorded state-action trajectories and updating the cost parameters to reduce the gap between MPC-predicted values and measured returns, enabling efficient gradient-based learning.
  • The method is designed to avoid repeatedly solving the MPC optimization during training, significantly reducing computational burden compared with more direct MPC-in-the-loop learning setups.
  • Experiments in simulation on a commercial humanoid platform show improved locomotion performance and increased robustness to model mismatch and external disturbances versus manually tuned baselines.
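The training idea in the key points above can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the true MPC cost is a structured cost over centroidal dynamics, whereas here `mpc_cost_to_go` is a toy discounted quadratic in hypothetical weights `theta = (q, r)`, and the gradient is taken by finite differences rather than the paper's (unspecified) differentiation scheme. The key property it does reproduce is that training only evaluates the cost-to-go along recorded trajectories; no MPC optimization is solved inside the loop.

```python
import numpy as np

def mpc_cost_to_go(theta, states, actions, gamma=0.99):
    """Evaluate a parameterized (toy quadratic) cost-to-go along a
    recorded state-action trajectory. No optimization is solved here."""
    q, r = theta  # hypothetical state and action cost weights
    value = 0.0
    for t, (s, a) in enumerate(zip(states, actions)):
        stage = q * float(s @ s) + r * float(a @ a)
        value += gamma**t * stage
    return value

def cost_matching_step(theta, batch, returns, lr=1e-4, eps=1e-5):
    """One gradient step shrinking the squared gap between the
    MPC-predicted values and the measured returns, using central
    finite differences on the cost parameters."""
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        tp, tm = theta.copy(), theta.copy()
        tp[i] += eps
        tm[i] -= eps
        loss_p = sum((mpc_cost_to_go(tp, s, a) - R) ** 2
                     for (s, a), R in zip(batch, returns))
        loss_m = sum((mpc_cost_to_go(tm, s, a) - R) ** 2
                     for (s, a), R in zip(batch, returns))
        grad[i] = (loss_p - loss_m) / (2 * eps)
    return theta - lr * grad
```

Because the cost-to-go is only evaluated (not optimized over) during each update, the per-iteration cost stays cheap, which is the computational advantage the paper claims over MPC-in-the-loop learning setups.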

Abstract

In this paper, we propose a cost-matching approach for optimal humanoid locomotion within a Model Predictive Control (MPC)-based Reinforcement Learning (RL) framework. A parameterized MPC formulation with centroidal dynamics is trained to approximate the action-value function obtained from high-fidelity closed-loop data. Specifically, the MPC cost-to-go is evaluated along recorded state-action trajectories, and the parameters are updated to minimize the discrepancy between MPC-predicted values and measured returns. This formulation enables efficient gradient-based learning while avoiding the computational burden of repeatedly solving the MPC problem during training. The proposed method is validated in simulation using a commercial humanoid platform. Results demonstrate improved locomotion performance and robustness to model mismatch and external disturbances compared with manually tuned baselines.
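In symbols (notation ours, not taken from the paper): writing V_θ(s_t, a_t) for the cost-to-go of the parameterized MPC evaluated along a recorded trajectory starting at (s_t, a_t), and G_t for the measured return from that point in the high-fidelity closed-loop data, the cost-matching objective described in the abstract amounts to a regression of the form

    minimize over θ:   E[ ( V_θ(s_t, a_t) − G_t )² ]

with the expectation taken over recorded state-action pairs. Since each term only requires evaluating V_θ along logged data, the update avoids re-solving the MPC problem during training.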