Sample-efficient Neuro-symbolic Proximal Policy Optimization

arXiv cs.AI / 4/29/2026


Key Points

  • The paper proposes a neuro-symbolic variant of Proximal Policy Optimization (PPO) aimed at reducing the data requirements of deep reinforcement learning in sparse-reward, long-horizon tasks with multiple sub-goals.
  • It transfers partial logical policy specifications learned in easier environments to guide learning in harder ones, using two symbolic-guidance mechanisms.
  • The first method, H-PPO-Product, biases the action distribution at sampling time, while the second, H-PPO-SymLoss, adds a symbolic regularization term to the PPO objective (a sketch of each mechanism follows below).
  • Experiments on OfficeWorld, WaterWorld, and DoorKey show faster learning and higher final returns than standard PPO and a Reward Machine baseline, even when the symbolic knowledge is imperfect.
  • Overall, the results suggest that incorporating symbolic policy structure can significantly improve reinforcement learning efficiency and robustness in challenging planning problems.

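The first mechanism can be pictured as multiplying the network's action distribution by a prior derived from the partial logical specification. The sketch below (PyTorch; the function and the name `symbolic_prior` are illustrative assumptions, not the paper's code) shows one way such a product-style bias could be applied at sampling time.

```python
# Hypothetical sketch of the H-PPO-Product idea: the neural policy's action
# distribution is multiplied by a symbolic prior at sampling time, which is
# equivalent to adding the prior's log-probabilities to the logits.
import torch
from torch.distributions import Categorical

def sample_with_symbolic_bias(policy_logits: torch.Tensor,
                              symbolic_prior: torch.Tensor,
                              eps: float = 1e-8):
    """policy_logits: (batch, n_actions) raw network outputs.
    symbolic_prior: (batch, n_actions) action probabilities suggested by the
    partial logical policy (e.g., uniform where the specification is silent)."""
    # Product of distributions in probability space = sum in log space.
    biased_logits = policy_logits + torch.log(symbolic_prior + eps)
    dist = Categorical(logits=biased_logits)
    action = dist.sample()
    return action, dist.log_prob(action)
```
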
Abstract

Deep Reinforcement Learning (DRL) algorithms often require a large amount of data and struggle in sparse-reward domains with long planning horizons and multiple sub-goals. In this paper, we propose a neuro-symbolic extension of Proximal Policy Optimization (PPO) that transfers partial logical policy specifications learned in easier instances to guide learning in more challenging settings. We introduce two integrations of symbolic guidance: (i) H-PPO-Product, which biases the action distribution at sampling time, and (ii) H-PPO-SymLoss, which augments the PPO loss with a symbolic regularization term. We evaluate our methods on three benchmarks (OfficeWorld, WaterWorld, and DoorKey), showing consistently faster learning and higher return at convergence than PPO and a Reward Machine baseline, even under imperfect symbolic knowledge.
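
For the second mechanism, a rough sketch of how a symbolic regularization term might be folded into the PPO clipped objective is given below. The cross-entropy form of the penalty and the weight `lambda_sym` are illustrative assumptions rather than details taken from the paper.

```python
# Hypothetical sketch of the H-PPO-SymLoss idea: the standard PPO clipped
# surrogate loss is augmented with a term that pulls the policy toward
# actions suggested by the symbolic specification.
import torch
import torch.nn.functional as F

def ppo_symloss(new_log_probs, old_log_probs, advantages,
                policy_logits, symbolic_prior,
                clip_eps: float = 0.2, lambda_sym: float = 0.1):
    # Standard PPO clipped surrogate objective.
    ratio = torch.exp(new_log_probs - old_log_probs)
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps)
    ppo_loss = -torch.min(ratio * advantages, clipped * advantages).mean()

    # Symbolic regularizer: cross-entropy between the symbolic prior and the
    # current policy distribution (one illustrative choice of penalty).
    log_policy = F.log_softmax(policy_logits, dim=-1)
    sym_loss = -(symbolic_prior * log_policy).sum(dim=-1).mean()

    return ppo_loss + lambda_sym * sym_loss
```

Note that with `lambda_sym = 0` the expression reduces to the standard clipped PPO surrogate, so the sketch only adds the symbolic term on top of the usual objective.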