Learning How and What to Memorize: Cognition-Inspired Two-Stage Optimization for Evolving Memory

arXiv cs.CL / 5/4/2026


Key Points

  • The paper addresses how LLM agents can maintain long-term, evolving user memory despite limited context windows, arguing that both static hand-crafted update rules and RL-based updates trained only on sparse outcome rewards provide inadequate supervision.
  • It proposes MemCoE, a cognition-inspired two-stage optimization framework that separates learning (how to organize memory) from decision-making (what to update).
  • In stage one, Memory Guideline Induction learns a global memory guideline from contrastive feedback treated as textual gradients (a hedged code sketch follows this list).
  • In stage two, Guideline-Aligned Memory Policy Optimization uses the learned guideline to craft structured process rewards and trains a multi-turn RL policy for guideline-following memory updates.
  • Experiments on three personalization memory benchmarks show consistent gains over strong baselines, with improved robustness, transferability, and efficiency across preference types, memory sizes, and noise levels.
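
The summary does not include code, so the following is a minimal Python sketch of what Stage 1's textual-gradient loop could look like. The `call_llm` stub, the prompt wording, and the fixed number of refinement steps are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of Stage 1 (Memory Guideline Induction): refine a global
# guideline using contrastive feedback interpreted as a textual "gradient".
# Prompts, interfaces, and loop structure are assumptions for illustration.

def call_llm(prompt: str) -> str:
    """Placeholder for any chat-completion call (plug in your own LLM client)."""
    raise NotImplementedError("provide an LLM client here")

def induce_guideline(guideline: str, good_traces: list[str],
                     bad_traces: list[str], steps: int = 5) -> str:
    """Iteratively refine a global memory guideline from contrastive feedback."""
    for _ in range(steps):
        # 1) Contrastive feedback: contrast successful vs. failed memory updates
        #    and ask the model what the current guideline gets wrong.
        critique = call_llm(
            "Current memory guideline:\n" + guideline +
            "\n\nUpdates that preserved the right user information:\n" +
            "\n".join(good_traces) +
            "\n\nUpdates that lost or corrupted user information:\n" +
            "\n".join(bad_traces) +
            "\n\nGive concrete editing advice for the guideline. "
            "This critique acts as a textual gradient."
        )
        # 2) "Apply" the textual gradient: rewrite the guideline along the critique.
        guideline = call_llm(
            "Rewrite the memory guideline so it follows this critique.\n"
            "Critique:\n" + critique + "\n\nGuideline:\n" + guideline
        )
    return guideline
```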

Abstract

Large language model (LLM) agents require long-term user memory for consistent personalization, but limited context windows hinder tracking evolving preferences over long interactions. Existing memory systems mainly rely on static, hand-crafted update rules; although reinforcement learning (RL)-based agents learn memory updates, sparse outcome rewards provide weak supervision, resulting in unstable long-horizon optimization. Drawing on memory schema theory and the functional division between prefrontal and hippocampal regions, we introduce MemCoE, a cognition-inspired two-stage optimization framework that learns how memory should be organized and what information to update. In the first stage, we propose Memory Guideline Induction to optimize a global guideline via contrastive feedback interpreted as textual gradients; in the second stage, Guideline-Aligned Memory Policy Optimization uses the induced guideline to define structured process rewards and performs multi-turn RL to learn a guideline-following memory evolution policy. We evaluate on three personalization memory benchmarks covering explicit and implicit preferences, varying memory sizes, and different noise levels, and observe consistent improvements over strong baselines with favorable robustness, transferability, and efficiency.
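
For the second stage, the abstract only states that the induced guideline defines structured process rewards used in multi-turn RL. The sketch below shows one way such reward shaping could be wired up; the criterion names, weights, and the judge interface are assumptions, not the paper's exact design.

```python
# Hypothetical sketch of Stage 2 reward shaping (Guideline-Aligned Memory
# Policy Optimization): turn the induced guideline into dense per-step
# (process) rewards that supplement the sparse outcome reward during RL.

from dataclasses import dataclass

@dataclass
class StepJudgment:
    relevant: bool      # did the update keep preference-relevant facts?
    consistent: bool    # does the new memory avoid contradicting old entries?
    concise: bool       # did it avoid copying redundant or noisy content?

def judge_step(guideline: str, old_memory: str, update: str) -> StepJudgment:
    """Placeholder: ask a guideline-conditioned LLM judge to score one update."""
    raise NotImplementedError("provide a guideline-conditioned judge here")

def trajectory_reward(guideline: str, memories: list[str], updates: list[str],
                      outcome_reward: float, w_process: float = 0.5) -> float:
    """Blend dense guideline-alignment rewards with the sparse outcome reward."""
    process = 0.0
    for old_mem, upd in zip(memories, updates):
        j = judge_step(guideline, old_mem, upd)
        # Each satisfied criterion contributes equally; one simple design choice.
        process += (j.relevant + j.consistent + j.concise) / 3.0
    process /= max(len(updates), 1)
    return w_process * process + (1.0 - w_process) * outcome_reward
```

The blended scalar would then feed a standard multi-turn policy-gradient update; the equal criterion weighting and the 0.5 mixing coefficient are arbitrary illustrative choices.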