AI Navigate

Hindsight-Anchored Policy Optimization: Turning Failure into Feedback in Sparse Reward Settings

arXiv cs.AI / 3/13/2026

📰 News · Models & Research

Key Points

  • HAPO introduces a reinforcement learning optimization framework for sparse-reward environments that anchors learning to teacher demonstrations during failure via a hindsight mechanism.
  • It combines the Synthetic Success Injection (SSI) operator with a Thompson sampling–inspired gating mechanism to create a self-paced curriculum.
  • The authors prove asymptotic consistency, showing that the method recovers an unbiased on-policy gradient as the policy improves and teacher guidance naturally wanes.
  • By addressing advantage collapse and high-variance gradients in group-relative policy optimization (GRPO), HAPO aims to surpass the limitations of static teacher forcing.
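The advantage-collapse problem and the SSI remedy in the points above can be sketched concretely. In GRPO-style methods, each rollout's advantage is its reward standardized against the group; in a sparse-reward setting where every rollout in a group fails, all advantages are zero and the gradient vanishes. The sketch below is illustrative only: the function names and the choice of which rollout to replace are assumptions, not the paper's implementation.

```python
def group_advantages(rewards):
    """Group-relative advantages in the GRPO style: each reward minus the
    group mean, normalized by the group standard deviation. When every
    rollout fails (all rewards zero), the std is zero and all advantages
    collapse to zero -- no learning signal."""
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    if std == 0:
        return [0.0] * len(rewards)  # advantage collapse
    return [(r - mean) / std for r in rewards]


def synthetic_success_injection(rewards, teacher_reward=1.0):
    """Hypothetical sketch of the SSI operator: swap one failed rollout's
    reward for a teacher demonstration scored as a success, restoring a
    non-degenerate advantage distribution within the group."""
    injected = rewards[:]
    injected[0] = teacher_reward  # which slot is replaced is an assumption
    return injected


failed_group = [0.0, 0.0, 0.0, 0.0]
print(group_advantages(failed_group))                              # all zeros
print(group_advantages(synthetic_success_injection(failed_group)))  # signal restored
```

After injection, the teacher demonstration receives a positive advantage and the failed rollouts receive negative ones, so the group once again carries a usable gradient.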

Abstract

Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a promising paradigm for post-training reasoning models. However, group-based methods such as Group Relative Policy Optimization (GRPO) face a critical dilemma in sparse-reward settings: pure Reinforcement Learning (RL) suffers from advantage collapse and high-variance gradient estimation, while mixed-policy optimization introduces persistent distributional bias. To resolve this dilemma, we introduce Hindsight-Anchored Policy Optimization (HAPO). HAPO employs the Synthetic Success Injection (SSI) operator, a hindsight mechanism that selectively anchors optimization to teacher demonstrations during failure. This injection is governed by a Thompson sampling–inspired gating mechanism, creating an autonomous, self-paced curriculum. Theoretically, we demonstrate that HAPO achieves *asymptotic consistency*: by naturally annealing the teacher signal as the policy improves, HAPO recovers the unbiased on-policy gradient. This ensures off-policy guidance acts as a temporary scaffold rather than a persistent ceiling, enabling the model to surpass the limitations of static teacher forcing.
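The self-annealing behavior described above can be illustrated with a toy Thompson-sampling gate. The idea is to maintain a Beta posterior over the policy's success probability and inject the teacher demonstration only when a posterior sample suggests the policy is still failing; as successes accumulate, the posterior concentrates and the gate fires less and less, so the off-policy signal wanes on its own. Everything here is a minimal sketch under assumed details: the Beta(1, 1) prior, the 0.5 threshold, and the gate's exact form are hypothetical, not taken from the paper.

```python
import random

def thompson_gate(successes, failures, rng, threshold=0.5):
    """Hypothetical Thompson-sampling gate: draw the policy's success
    probability from a Beta(successes + 1, failures + 1) posterior and
    inject the teacher demonstration only when the draw falls below a
    (assumed) threshold. High observed success => injection becomes rare."""
    sampled_success_prob = rng.betavariate(successes + 1, failures + 1)
    return sampled_success_prob < threshold

rng = random.Random(0)

# Early training: the policy mostly fails, so the gate usually opens
# and teacher demonstrations are injected.
early_injections = sum(thompson_gate(1, 9, rng) for _ in range(1000))

# Late training: the policy mostly succeeds, the posterior concentrates
# near 1, and injection effectively stops -- the teacher signal anneals
# away and optimization recovers the on-policy gradient.
late_injections = sum(thompson_gate(90, 10, rng) for _ in range(1000))

print(early_injections, late_injections)
```

This self-paced schedule needs no hand-tuned decay: the injection rate falls as a direct consequence of the policy's own measured success, which is what makes the teacher a temporary scaffold rather than a permanent ceiling.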