Towards Effective Experiential Learning: Dual Guidance for Utilization and Internalization

arXiv cs.LG / 3/26/2026


Key Points

  • The paper highlights a gap between current RLVR-based training of LLMs and how humans learn by combining external experience with internalized knowledge.
  • It proposes Dual Guidance Optimization (DGO), which uses both an external experience bank built from prior trajectories and the model’s internal knowledge to guide exploration during RLVR training.
  • DGO operates in a closed loop where new trajectories both improve the experience bank and update model parameters, iteratively strengthening utilization and internalization.
  • Experiments reportedly show DGO consistently outperforms baseline RLVR training methods on reasoning tasks, indicating more effective learning from experience.

Abstract

Recently, reinforcement learning (RL) has become an important approach for improving the capabilities of large language models (LLMs). In particular, reinforcement learning from verifiable rewards (RLVR) has emerged as a promising paradigm for reasoning tasks. However, existing RL-based training remains only a rough approximation of human learning. Human learners leverage both external and internal experience to guide exploration and gradually internalize useful trajectories into stable knowledge. Motivated by this gap, we ask: how can LLMs better utilize and internalize experience during RLVR training? To answer this question, we propose Dual Guidance Optimization (DGO), a unified framework that leverages external and internal experience to improve training effectiveness. Specifically, DGO first constructs an experience bank from previously explored trajectories. The policy then performs exploration under the joint guidance of the experience bank and the model's internal knowledge. The resulting trajectories are further used to refine the experience bank and to optimize the model parameters, forming a closed loop of experience utilization and internalization. Experiments show that DGO consistently outperforms baseline methods, suggesting that better utilization and internalization of experience lead to more effective reasoning.
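The closed loop the abstract describes — explore under joint guidance from stored experience and internal knowledge, score trajectories with a verifiable reward, refine the experience bank, and update the policy — can be sketched as a toy simulation. Everything below (the `ExperienceBank` class, the scalar `policy_bias` standing in for internal knowledge, the reward function) is an illustrative assumption for exposition, not the paper's actual implementation:

```python
import random

class ExperienceBank:
    """Toy experience bank: stores (trajectory, reward) pairs,
    keeping only the highest-reward trajectories."""
    def __init__(self, capacity=100):
        self.capacity = capacity
        self.entries = []  # list of (trajectory, reward)

    def add(self, trajectory, reward):
        self.entries.append((trajectory, reward))
        self.entries.sort(key=lambda e: e[1], reverse=True)
        self.entries = self.entries[: self.capacity]

    def retrieve(self):
        # External guidance: best past trajectory, if any exists yet.
        return self.entries[0][0] if self.entries else None

def explore(policy_bias, external_hint):
    """Stand-in for jointly guided exploration: extend a retrieved
    trajectory (external experience) with a step drawn from the
    policy's own bias (internal knowledge)."""
    base = external_hint if external_hint is not None else []
    step = 1 if random.random() < policy_bias else 0
    return base + [step]

def verifiable_reward(trajectory):
    """Stand-in for an RLVR-style verifiable reward: here, simply
    the number of 'correct' (1-valued) steps."""
    return sum(trajectory)

def dgo_loop(iterations=50, seed=0):
    random.seed(seed)
    bank = ExperienceBank()
    policy_bias = 0.5  # the toy policy's internal knowledge
    for _ in range(iterations):
        hint = bank.retrieve()                 # utilize external experience
        traj = explore(policy_bias, hint)      # jointly guided exploration
        r = verifiable_reward(traj)
        bank.add(traj, r)                      # refine the experience bank
        # "Internalization": nudge the policy toward rewarded behavior.
        policy_bias = min(0.99, policy_bias + 0.01 * (r > 0))
    return policy_bias, bank
```

Each iteration both enriches the bank (utilization) and shifts the policy parameter (internalization), mirroring the closed loop described above, though the real method operates on LLM rollouts and gradient updates rather than a scalar bias.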