MemEvoBench: Benchmarking Memory MisEvolution in LLM Agents

arXiv cs.CL · April 20, 2026


Key Points

  • The paper introduces MemEvoBench, a new benchmark to measure “memory mis-evolution” (behavioral drift) in LLM agents caused by repeated exposure to misleading information.
  • It evaluates long-horizon memory safety using adversarial memory injection, noisy tool outputs, and biased feedback across QA-style tasks (7 domains, 36 risk types) and workflow-style tasks adapted from 20 Agent-SafetyBench environments.
  • The benchmark simulates memory evolution by running multi-round interactions with mixed benign and misleading memory pools.
  • Experiments show that representative models experience substantial safety degradation when memory is updated with biased information, and the analysis indicates memory evolution is a key driver of failures.
  • The authors conclude that defenses based only on static prompt strategies are not sufficient, highlighting an urgent need to secure memory evolution mechanisms in LLM agents.
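To make the "memory evolution" setup concrete, here is a toy sketch of how behavioral drift can compound in a memory pool that mixes benign and misleading entries. This is not the paper's implementation: the pool structure, injection rate, and majority-vote write-back rule are all illustrative assumptions.

```python
import random

def simulate_memory_evolution(rounds=10, retrieve_k=3, inject_rate=0.3, seed=0):
    """Toy model (illustrative only, not MemEvoBench's code): each round the
    agent retrieves a sample from its memory pool, an adversary occasionally
    injects a misleading entry (e.g. a poisoned tool output), and the agent
    writes back a new memory whose label follows the majority of what it
    retrieved -- so misleading content can compound across rounds."""
    rng = random.Random(seed)
    # Start with a mostly-benign pool; True marks a misleading entry.
    pool = [False] * 9 + [True]
    contamination = []  # fraction of misleading memories after each round
    for _ in range(rounds):
        retrieved = rng.sample(pool, min(retrieve_k, len(pool)))
        # Adversarial memory injection with some probability.
        if rng.random() < inject_rate:
            pool.append(True)
        # The agent's write-back inherits the majority label of what it
        # retrieved, modelling drift driven by memory updates themselves.
        pool.append(sum(retrieved) * 2 > len(retrieved))
        contamination.append(sum(pool) / len(pool))
    return contamination

history = simulate_memory_evolution()
```

Tracking the contamination fraction over rounds mirrors the benchmark's long-horizon framing: a static, one-shot safety check on the initial pool would miss the gradual drift that only appears after repeated updates.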

Abstract

Equipping Large Language Models (LLMs) with persistent memory enhances interaction continuity and personalization but introduces new safety risks: contaminated or biased memory accumulation can trigger abnormal agent behaviors. Existing evaluation methods have not yet established a standardized framework for measuring memory mis-evolution, the gradual behavioral drift that results from repeated exposure to misleading information. To address this gap, we introduce MemEvoBench, the first benchmark evaluating long-horizon memory safety in LLM agents against adversarial memory injection, noisy tool outputs, and biased feedback. The framework consists of QA-style tasks across 7 domains and 36 risk types, complemented by workflow-style tasks adapted from 20 Agent-SafetyBench environments with noisy tool returns. Both settings employ mixed benign and misleading memory pools within multi-round interactions to simulate memory evolution. Experiments on representative models reveal substantial safety degradation under biased memory updates, and our analysis identifies memory evolution itself as a significant contributor to these failures. Furthermore, static prompt-based defenses prove insufficient, underscoring the urgency of securing memory evolution mechanisms in LLM agents.