AI Navigate

Experience Is the Best Teacher: Encouraging Effective Exploration in Reinforcement Learning for LLMs

arXiv cs.AI / 2026/3/23


Key Points

  • HeRL is a reinforcement learning framework built on hindsight-experience guidance: it treats failed trajectories together with their unmet rubrics as in-context guidance, steering the LLM toward desired behaviors beyond its current policy distribution.
  • It introduces a bonus reward that encourages responses with greater potential for improvement under such guidance, aiming to improve exploration and gradient estimation.
  • The authors report extensive experiments showing that HeRL achieves superior performance across benchmarks and can further benefit from experience-guided self-improvement at test time; the code is available at https://github.com/sikelifei/HeRL.
  • This work offers a new approach to exploration in reinforcement learning for LLMs, with the potential to enable more robust generalization on reasoning tasks.

Abstract

Reinforcement Learning (RL) with rubric-based rewards has recently shown remarkable progress in enhancing general reasoning capabilities of Large Language Models (LLMs), yet still suffers from ineffective exploration confined to the current policy distribution. In fact, RL optimization can be viewed as steering the policy toward an ideal distribution that maximizes the rewards, while effective exploration should align efforts with the desired target. Leveraging this insight, we propose HeRL, a Hindsight experience guided Reinforcement Learning framework to bootstrap effective exploration by explicitly telling LLMs the desired behaviors specified in rewards. Concretely, HeRL treats failed trajectories along with their unmet rubrics as hindsight experience, which serves as in-context guidance for the policy to explore desired responses beyond its current distribution. Additionally, we introduce a bonus reward to incentivize responses with greater potential for improvement under such guidance. HeRL facilitates effective learning from desired high quality samples without repeated trial-and-error from scratch, yielding a more accurate estimation of the expected gradient theoretically. Extensive experiments across various benchmarks demonstrate that HeRL achieves superior performance gains over baselines, and can further benefit from experience guided self-improvement at test time. Our code is available at https://github.com/sikelifei/HeRL.
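The abstract's core loop — fold a failed trajectory and its unmet rubrics back into the prompt, then grant a bonus when the guided attempt improves — can be sketched in a few lines. This is an illustrative toy, not the authors' implementation: the function names, the rubric format (boolean predicates on response text), and the bonus scheme are all assumptions made here for concreteness.

```python
# Toy sketch of HeRL-style hindsight guidance. Assumptions: a rubric is a
# named boolean check on the response text; the actual LLM rollout is out
# of scope, so we operate directly on response strings.

def check_rubrics(response, rubrics):
    """Return the names of rubrics that the response fails to satisfy."""
    return [name for name, check in rubrics.items() if not check(response)]

def build_guided_prompt(prompt, failed_response, unmet_rubrics):
    """Fold the failed trajectory and its unmet rubrics back into the
    context, so the next rollout is steered toward the desired behavior."""
    hints = "; ".join(unmet_rubrics)
    return (f"{prompt}\n"
            f"[Previous attempt]: {failed_response}\n"
            f"[Unmet rubrics]: {hints}\n"
            f"Revise the attempt so that all rubrics are satisfied.")

def herl_reward(response, guided_response, rubrics, bonus=0.5):
    """Base reward = fraction of rubrics met by the original response.
    A bonus is added when the guided revision does strictly better,
    i.e. the response shows potential for improvement under guidance."""
    base = 1 - len(check_rubrics(response, rubrics)) / len(rubrics)
    guided = 1 - len(check_rubrics(guided_response, rubrics)) / len(rubrics)
    return base + (bonus if guided > base else 0.0)

# Minimal demo with two toy rubrics.
rubrics = {
    "mentions units (kg)": lambda r: "kg" in r,
    "states an answer": lambda r: "answer" in r.lower(),
}
failed = "The mass is 5."
unmet = check_rubrics(failed, rubrics)          # both rubrics unmet
guided_prompt = build_guided_prompt("What is the mass?", failed, unmet)
revised = "Answer: the mass is 5 kg."           # stand-in for a guided rollout
reward = herl_reward(failed, revised, rubrics)  # base 0.0 + bonus 0.5
```

In a real training loop the guided rollout would come from the policy conditioned on `guided_prompt`, and the bonus would shape which samples the policy gradient emphasizes; here both are simulated with fixed strings.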