From Recall to Forgetting: Benchmarking Long-Term Memory for Personalized Agents
arXiv cs.CL / 4/23/2026
Key Points
- The paper highlights a gap in existing long-term memory benchmarks for personalized agents, which mostly test fact retrieval rather than consolidation and adaptation under changing circumstances.
- It introduces Memora, a new benchmark designed to evaluate long-term memory over weeks to months, using three memory-grounded tasks: remembering, reasoning, and recommending.
- To improve reliability, Memora uses automated memory-grounding checks plus human evaluation, and the paper proposes Forgetting-Aware Memory Accuracy (FAMA) to penalize use of obsolete or invalidated memories.
- Experiments with four LLMs and six memory-augmented agents show that agents frequently reuse invalidated memories and struggle to reconcile memories as they evolve, so memory augmentation yields only marginal gains.
- Overall, the results suggest that current approaches to long-term memory remain insufficient for the requirements of personalized agents operating over long periods with frequent knowledge updates.
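The paper does not give FAMA's exact formula, but the idea of penalizing reliance on obsolete memories can be illustrated with a minimal sketch. Everything below is a hypothetical construction for intuition only: the function name, the `(is_correct, memories_used)` answer format, and the strict "correct and cites no invalidated memory" rule are assumptions, not the paper's definition.

```python
# Illustrative FAMA-style metric (assumed, not the paper's formula):
# an answer counts only if it is correct AND cites no invalidated memory.
def fama(answers, invalidated):
    """answers: list of (is_correct, memories_used) tuples;
    invalidated: set of memory IDs known to be obsolete."""
    if not answers:
        return 0.0
    good = sum(
        1 for correct, used in answers
        if correct and not (set(used) & invalidated)
    )
    return good / len(answers)

# Example: the second answer is factually right but relies on an
# invalidated memory (m3), so it is penalized.
answers = [
    (True,  ["m1", "m2"]),  # correct, valid memories
    (True,  ["m3"]),        # correct, but m3 is invalidated
    (False, ["m1"]),        # incorrect
]
print(fama(answers, invalidated={"m3"}))  # ≈ 0.33
```

The point of such a metric is that plain accuracy would score the example above at 2/3, hiding the fact that one "correct" answer was grounded in stale knowledge; a forgetting-aware score surfaces exactly the failure mode the paper reports.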