Learning to Plan, Planning to Learn: Adaptive Hierarchical RL-MPC for Sample-Efficient Decision Making

arXiv cs.RO / 4/17/2026


Key Points

  • The paper introduces an adaptive hierarchical reinforcement learning–MPC method that tightly couples hierarchical planning with learning for sample-efficient decision making.
  • It uses RL-derived actions to guide the MPPI sampler, and adaptively aggregates MPPI samples to refine the value estimate.
  • The approach performs additional MPPI exploration specifically when value estimates are uncertain, improving robustness during training and policy learning.
  • Experiments across race driving, a modified Acrobot, and Lunar Lander with obstacles show better data efficiency and higher performance.
  • Reported gains include up to a 72% increase in task success rate versus prior methods and faster convergence (2.1×) versus non-adaptive sampling.
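The coupling described in the points above can be sketched in a minimal form: an RL policy action seeds the nominal control sequence, MPPI perturbs and softmax-weights sampled rollouts, and the weighted cost doubles as a value estimate whose sample variance signals when extra exploration is warranted. This is an illustrative reconstruction, not the paper's implementation; `mppi_plan` and all its parameters are hypothetical names.

```python
import numpy as np

def mppi_plan(dynamics, cost, policy_action, state, horizon=15,
              n_samples=64, noise_std=0.5, temperature=1.0, rng=None):
    """Sketch of MPPI seeded by an RL policy action (hypothetical,
    illustrating the RL-MPC coupling described in the summary)."""
    rng = np.random.default_rng() if rng is None else rng

    # RL prior: repeat the policy's action as the nominal control sequence.
    nominal = np.tile(policy_action(state), (horizon, 1))
    noise = rng.normal(0.0, noise_std, size=(n_samples, *nominal.shape))
    controls = nominal[None] + noise

    # Roll out each perturbed control sequence and accumulate its cost.
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        s = state
        for t in range(horizon):
            s = dynamics(s, controls[k, t])
            costs[k] += cost(s, controls[k, t])

    # Standard MPPI update: softmax-weighted aggregation of the samples.
    w = np.exp(-(costs - costs.min()) / temperature)
    w /= w.sum()
    plan = (w[:, None, None] * controls).sum(axis=0)

    # The weighted cost gives a value estimate for the current state;
    # its weighted variance is one possible uncertainty signal a caller
    # could use to trigger additional MPPI exploration.
    mean_cost = (w * costs).sum()
    value_est = -mean_cost
    value_var = (w * (costs - mean_cost) ** 2).sum()
    return plan, value_est, value_var

# Toy usage: 1-D point mass, state [pos, vel], control = acceleration.
dyn = lambda s, u: np.array([s[0] + 0.1 * s[1], s[1] + 0.1 * u[0]])
cst = lambda s, u: s[0] ** 2 + 0.01 * u[0] ** 2
pol = lambda s: np.array([-s[0]])  # crude proportional stand-in "policy"
plan, v, var = mppi_plan(dyn, cst, pol, np.array([1.0, 0.0]),
                         rng=np.random.default_rng(0))
```

One natural reading of the adaptive step is that when `value_var` exceeds a threshold, the planner draws additional MPPI samples before committing, concentrating exploration where value estimates are least trustworthy.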

Abstract

We propose a new approach for solving planning problems with a hierarchical structure, fusing reinforcement learning and MPC planning. Our formulation tightly and elegantly couples the two planning paradigms. It leverages reinforcement learning actions to inform the MPPI sampler, and adaptively aggregates MPPI samples to inform the value estimation. The resulting adaptive process applies further MPPI exploration where value estimates are uncertain, improving training robustness and the overall resulting policies. This yields a robust planning approach that can handle complex planning problems and easily adapts to different applications, as demonstrated over several domains, including race driving, a modified Acrobot, and Lunar Lander with added obstacles. Our results in these domains show better data efficiency and overall performance in terms of both rewards and task success, with up to a 72% increase in success rate compared to existing approaches, as well as accelerated convergence (2.1×) compared to non-adaptive sampling.