PriPG-RL: Privileged Planner-Guided Reinforcement Learning for Partially Observable Systems with Anytime-Feasible MPC

arXiv cs.RO / 4/10/2026


Key Points

  • This paper proposes a framework for improving reinforcement learning (RL) under partial observability by exploiting a privileged planner, together with its state and model information, that is available only during training.
  • An "anytime-feasible" Model Predictive Control (MPC) algorithm serves as the privileged planner, while the learning agent acts on a lossy projection of the true state.
  • On the learning side, Planner-to-Policy Soft Actor-Critic (P2P-SAC) distills the planner's knowledge into the policy, mitigating the disadvantage of partial observability and aiming to improve both sample efficiency and final performance.
  • In addition to theoretical analysis, the approach is validated in simulation with NVIDIA Isaac Lab and deployed on a real Unitree Go2 quadruped navigating obstacle-rich environments.

Abstract

This paper addresses the problem of training a reinforcement learning (RL) policy under partial observability by exploiting a privileged, anytime-feasible planner agent available exclusively during training. We formalize this as a Partially Observable Markov Decision Process (POMDP) in which a planner agent with access to an approximate dynamical model and privileged state information guides a learning agent that observes only a lossy projection of the true state. To realize this framework, we introduce an anytime-feasible Model Predictive Control (MPC) algorithm that serves as the planner agent. For the learning agent, we propose Planner-to-Policy Soft Actor-Critic (P2P-SAC), a method that distills the planner agent's privileged knowledge to mitigate partial observability and thereby improve both sample efficiency and final policy performance. We support this framework with rigorous theoretical analysis. Finally, we validate our approach in simulation using NVIDIA Isaac Lab and successfully deploy it on a real-world Unitree Go2 quadruped navigating complex, obstacle-rich environments.
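The core distillation idea in the abstract — a privileged planner acting on the full state guiding a policy that sees only a lossy projection — can be illustrated with a toy sketch. This is not the paper's P2P-SAC algorithm (which builds on Soft Actor-Critic); it is a minimal NumPy illustration where the "planner" is a stand-in linear feedback law, the "policy" is linear in the partial observation, and only the distillation term (the squared gap between planner and policy actions) is minimized. All names and shapes here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def lossy_projection(state, keep=2):
    """The learner observes only the first `keep` dims of the true state."""
    return state[:keep]

def planner_action(state):
    """Stand-in for the privileged planner: a fixed full-state feedback law.
    (In the paper this role is played by an anytime-feasible MPC.)"""
    K = np.array([[1.0, 0.0, 0.5, 0.0],
                  [0.0, 1.0, 0.0, 0.5]])
    return -K @ state

def policy_action(obs, W):
    """Learner policy: linear in the lossy observation."""
    return W @ obs

def distillation_loss(states, W):
    """Mean squared gap between planner actions (full state) and policy
    actions (partial observation) -- the kind of distillation term that
    would be added to the actor objective in a planner-guided setup."""
    gaps = [planner_action(s) - policy_action(lossy_projection(s), W)
            for s in states]
    return float(np.mean([g @ g for g in gaps]))

# Toy training loop: fit W by gradient descent on the distillation term alone.
states = rng.normal(size=(256, 4))
W = np.zeros((2, 2))
initial_loss = distillation_loss(states, W)
lr = 0.05
for _ in range(200):
    grad = np.zeros_like(W)
    for s in states:
        g = policy_action(lossy_projection(s), W) - planner_action(s)
        grad += 2.0 * np.outer(g, lossy_projection(s)) / len(states)
    W -= lr * grad
final_loss = distillation_loss(states, W)
```

Note that `final_loss` does not reach zero: the planner exploits state dimensions the policy cannot observe, so an irreducible gap remains. This is exactly the partial-observability penalty that the paper's theoretical analysis characterizes and that distillation seeks to minimize.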