MemReward: Graph-Based Experience Memory for LLM Reward Prediction with Limited Labels
arXiv cs.AI / March 23, 2026
Key Points
- MemReward introduces a graph-based experience memory for LLM reinforcement learning with limited reward labels: each rollout's thinking process and final answer are stored as nodes in a heterogeneous graph, and a GNN propagates rewards to unlabeled nodes during online optimization.
- The framework models queries, thinking processes, and answers as nodes connected by similarity and structural edges, enabling reward signals to transfer across related experiences.
- Experiments on Qwen2.5-3B and Qwen2.5-1.5B across mathematics, question answering, and code generation show that with only 20% of labels, MemReward reaches about 97.3% of Oracle performance on the 3B model and 96.6% on the 1.5B model, and even surpasses Oracle on out-of-domain tasks.
- Performance scales smoothly with the label budget, reaching 99.4% of Oracle at 70% labels, indicating strong data efficiency and practical potential for RLHF workflows.
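The digest does not spell out the paper's GNN architecture, but the core idea of the bullets above (propagating known rewards to unlabeled rollout nodes over similarity edges) can be sketched with plain label propagation. Everything here is an illustrative assumption, not MemReward's actual implementation: `propagate_rewards`, the adjacency matrix, and the `alpha` mixing weight are hypothetical stand-ins for the paper's learned GNN.

```python
import numpy as np

def propagate_rewards(adj, rewards, labeled_mask, alpha=0.5, iters=50):
    """Spread known rewards to unlabeled nodes over a similarity graph.

    adj          -- (n, n) nonnegative edge weights (similarity/structural edges)
    rewards      -- (n,) reward values; entries for unlabeled nodes are ignored
    labeled_mask -- (n,) bool, True where the reward label is known
    alpha        -- how strongly a node mixes in its neighbors' rewards
    """
    deg = adj.sum(axis=1, keepdims=True)
    deg[deg == 0] = 1.0
    P = adj / deg                      # row-normalize into a transition matrix
    r = np.where(labeled_mask, rewards, 0.0).astype(float)
    for _ in range(iters):
        r = alpha * (P @ r) + (1 - alpha) * r   # diffuse rewards one hop
        r[labeled_mask] = rewards[labeled_mask]  # clamp the known labels
    return r

# Toy chain of 4 rollout nodes: node 0 labeled reward 1.0, node 3 labeled 0.0,
# nodes 1 and 2 unlabeled and connected by similarity edges.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
rewards = np.array([1.0, 0.0, 0.0, 0.0])
mask = np.array([True, False, False, True])
r = propagate_rewards(adj, rewards, mask)
# Node 1 (nearer the positive label) ends up with a higher estimated
# reward than node 2, so related experiences inherit graded credit.
```

A learned GNN replaces the fixed averaging with trained message functions, but the data-efficiency intuition is the same: a small labeled budget anchors the graph, and structure fills in the rest.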