MemReward: Graph-Based Experience Memory for LLM Reward Prediction with Limited Labels

arXiv cs.AI / 3/23/2026


Key Points

  • MemReward introduces a graph-based experience memory for LLM reinforcement learning with limited reward labels, storing each rollout (thinking process and final answer) as nodes in a heterogeneous graph and using a GNN to propagate rewards to unlabeled nodes during online optimization.
  • The framework models queries, thinking processes, and answers as nodes connected by similarity and structural edges, enabling reward signals to transfer across related experiences.
  • Experiments on Qwen2.5-3B and 1.5B across mathematics, question answering, and code generation show that with only 20% of labels, MemReward achieves about 97.3% of Oracle performance on the 3B model and 96.6% on the 1.5B model, and even surpasses the Oracle on out-of-domain tasks.
  • Performance scales smoothly with the label budget, reaching 99.4% of Oracle at 70% labels, indicating strong data efficiency and practical potential for RLHF workflows.

Abstract

Training large language models (LLMs) for complex reasoning via reinforcement learning requires reward labels that specify whether the generated rollouts are correct. However, obtaining reward labels at scale often requires expensive human labeling or time-consuming verification procedures; for instance, evaluating mathematical proofs demands expert review, while open-ended question answering lacks definitive ground truth. When reward labels are limited, the effectiveness of reinforcement learning fine-tuning is correspondingly constrained. We introduce MemReward, a graph-based experience memory framework: an initial LLM policy generates rollouts for each query, each comprising a thinking process and a final answer, and these rollouts are stored as experience memory. Queries, thinking processes, and answers form nodes in a heterogeneous graph with similarity and structural edges; a GNN trained on labeled nodes propagates rewards to unlabeled rollouts during online optimization. Experiments on Qwen2.5-3B and 1.5B across mathematics, question answering, and code generation demonstrate that MemReward, with only 20% of labels, achieves 97.3% of Oracle performance on 3B and 96.6% on 1.5B, surpassing the Oracle on out-of-domain tasks. Performance scales smoothly with the label budget, reaching 99.4% of Oracle at 70% labels.
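The core idea — spreading sparse reward labels across a heterogeneous graph of queries, thinking processes, and answers — can be illustrated with a minimal sketch. The paper trains a GNN for this; the snippet below substitutes a much simpler iterative label-propagation rule as a stand-in, and all node names, edge weights, and reward values are invented for illustration, not taken from the paper.

```python
# Minimal sketch of reward propagation over a small heterogeneous
# experience graph. Structural edges link query -> thinking -> answer
# within a rollout; similarity edges link related thinking processes.
# A simple clamped-averaging rule stands in for the paper's trained GNN.
from collections import defaultdict

# Three rollouts for one query; only two answers carry reward labels.
edges = [
    ("q1", "t1", 1.0), ("t1", "a1", 1.0),  # rollout 1 (labeled correct)
    ("q1", "t2", 1.0), ("t2", "a2", 1.0),  # rollout 2 (unlabeled)
    ("q1", "t3", 1.0), ("t3", "a3", 1.0),  # rollout 3 (labeled incorrect)
    ("t1", "t2", 0.8),                     # t2 highly similar to t1
    ("t2", "t3", 0.3),                     # t2 weakly similar to t3
]

neighbors = defaultdict(list)
for u, v, w in edges:
    neighbors[u].append((v, w))
    neighbors[v].append((u, w))

labeled = {"a1": 1.0, "a3": 0.0}            # sparse ground-truth rewards
reward = {n: labeled.get(n, 0.5) for n in neighbors}  # 0.5 = unknown prior

# Iterate: unlabeled nodes move toward the weighted mean of their
# neighbors; labeled nodes stay clamped to their true reward.
for _ in range(100):
    nxt = {}
    for n in reward:
        if n in labeled:
            nxt[n] = labeled[n]
        else:
            total = sum(w for _, w in neighbors[n])
            nxt[n] = sum(reward[v] * w for v, w in neighbors[n]) / total
    reward = nxt

# a2's pseudo-reward leans toward the correct rollout it resembles most.
print(f"propagated reward for a2: {reward['a2']:.3f}")
```

Because t2's similarity edge to the correct rollout (0.8) outweighs its edge to the incorrect one (0.3), the unlabeled answer a2 receives a pseudo-reward above the 0.5 prior, which is the behavior the GNN in MemReward is trained to produce at scale.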