AI Navigate

Experience is the Best Teacher: Motivating Effective Exploration in Reinforcement Learning for LLMs

arXiv cs.AI / 3/23/2026


Key Points

  • HeRL is a hindsight-experience guided reinforcement learning framework that uses failed trajectories and unmet rubrics as in-context guidance to steer LLMs toward desired behaviors beyond their current distribution.
  • It introduces a bonus reward to incentivize responses with greater potential for improvement under guidance, aiming to improve exploration and gradient estimation.
  • The authors report extensive experiments showing HeRL achieves superior performance across benchmarks and can benefit from experience-guided self-improvement at test time, with code available at https://github.com/sikelifei/HeRL.
  • This work offers a new approach to exploration in RL for LLMs, potentially enabling more robust generalization in reasoning tasks.

Abstract

Reinforcement Learning (RL) with rubric-based rewards has recently shown remarkable progress in enhancing the general reasoning capabilities of Large Language Models (LLMs), yet it still suffers from ineffective exploration confined to the current policy distribution. In fact, RL optimization can be viewed as steering the policy toward an ideal distribution that maximizes the rewards, so effective exploration should align its efforts with that desired target. Leveraging this insight, we propose HeRL, a Hindsight experience guided Reinforcement Learning framework that bootstraps effective exploration by explicitly telling LLMs the desired behaviors specified in the rewards. Concretely, HeRL treats failed trajectories along with their unmet rubrics as hindsight experience, which serves as in-context guidance for the policy to explore desired responses beyond its current distribution. Additionally, we introduce a bonus reward to incentivize responses with greater potential for improvement under such guidance. HeRL facilitates effective learning from desired high-quality samples without repeated trial-and-error from scratch, yielding a theoretically more accurate estimate of the expected gradient. Extensive experiments across various benchmarks demonstrate that HeRL achieves superior performance gains over baselines and can further benefit from experience-guided self-improvement at test time. Our code is available at https://github.com/sikelifei/HeRL.
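To make the abstract's mechanism concrete, here is a minimal sketch of the two ideas it describes: turning a failed trajectory and its unmet rubrics into in-context guidance, and adding a bonus reward for responses that improve most under that guidance. All function names, the prompt template, and the exact reward shape below are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of the HeRL idea: a failed attempt plus its unmet
# rubrics become in-context guidance, and the reward gains a bonus term
# for improvement potential under that guidance. Names are illustrative.

def build_guidance_prompt(question, failed_response, unmet_rubrics):
    """Turn a failed attempt and its unmet rubrics into in-context guidance."""
    rubric_text = "\n".join(f"- {r}" for r in unmet_rubrics)
    return (
        f"Question: {question}\n"
        f"Previous attempt (failed):\n{failed_response}\n"
        f"The attempt did not satisfy these rubrics:\n{rubric_text}\n"
        "Write a new answer that satisfies all of the rubrics above."
    )

def herl_reward(base_reward, guided_reward, bonus_weight=0.5):
    """Base rubric reward plus a bonus proportional to how much the
    response improves when the policy retries with hindsight guidance.
    The bonus is clipped at zero so unhelpful guidance is not penalized."""
    improvement = max(0.0, guided_reward - base_reward)
    return base_reward + bonus_weight * improvement

# Example: a weak response (0.2) that reaches 0.8 under guidance earns
# 0.2 + 0.5 * (0.8 - 0.2) = 0.5 as its training reward.
print(herl_reward(0.2, 0.8))
```

In a training loop, `build_guidance_prompt` would wrap each failed rollout before re-sampling from the policy, and `herl_reward` would replace the raw rubric score when computing advantages; how the paper actually schedules guided rollouts and weights the bonus is not specified here.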