GAM: Hierarchical Graph-based Agentic Memory for LLM Agents

arXiv cs.AI · April 15, 2026


Key Points

  • The paper proposes GAM, a hierarchical graph-based agentic memory framework designed to improve LLM agents’ long-term coherence by separating memory encoding from consolidation.
  • GAM isolates ongoing dialogue in an event progression graph and integrates it into a topic associative network only when a semantic shift occurs, reducing interference from transient noise.
  • It adds a graph-guided, multi-factor retrieval strategy to increase context precision during long-horizon conversations.
  • Experiments on LoCoMo and LongDialQA report consistent improvements over state-of-the-art baselines in reasoning accuracy and efficiency.

Abstract

To sustain coherent long-term interactions, Large Language Model (LLM) agents must navigate the tension between acquiring new information and retaining prior knowledge. Current unified stream-based memory systems facilitate context updates but remain vulnerable to interference from transient noise. Conversely, discrete structured memory architectures provide robust knowledge retention but often struggle to adapt to evolving narratives. To address this, we propose GAM, a hierarchical Graph-based Agentic Memory framework that explicitly decouples memory encoding from consolidation to effectively resolve the conflict between rapid context perception and stable knowledge retention. By isolating ongoing dialogue in an event progression graph and integrating it into a topic associative network only upon semantic shifts, our approach minimizes interference while preserving long-term consistency. Additionally, we introduce a graph-guided, multi-factor retrieval strategy to enhance context precision. Experiments on LoCoMo and LongDialQA indicate that our method consistently outperforms state-of-the-art baselines in both reasoning accuracy and efficiency.
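The two-tier flow described above can be illustrated with a toy sketch. This is not the paper's implementation: the class name, the bag-of-words "embedding", the cosine shift threshold, and the recency-weighted retrieval score are all hypothetical stand-ins for GAM's actual components, chosen only to make the encode/consolidate split concrete.

```python
# Hypothetical sketch of GAM's decoupled encode/consolidate flow (not the paper's code):
# utterances accumulate in a transient "event progression graph" and are folded
# into a "topic associative network" only when a semantic shift is detected.
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector; a real system would use an embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class GraphMemory:
    def __init__(self, shift_threshold=0.2):
        self.event_graph = []      # transient, ordered dialogue events
        self.topic_network = {}    # consolidated topic -> list of events
        self.shift_threshold = shift_threshold

    def encode(self, utterance):
        """Buffer the utterance; consolidate only on a semantic shift."""
        if self.event_graph and cosine(
                embed(utterance), embed(self.event_graph[-1])) < self.shift_threshold:
            self.consolidate()
        self.event_graph.append(utterance)

    def consolidate(self):
        """Fold the finished event segment into the topic network."""
        if not self.event_graph:
            return
        topic = self.event_graph[0].split()[0]  # toy topic key
        self.topic_network.setdefault(topic, []).extend(self.event_graph)
        self.event_graph = []

    def retrieve(self, query, k=3):
        """Toy multi-factor retrieval: similarity plus a small recency bonus."""
        pool = [e for events in self.topic_network.values() for e in events]
        pool += self.event_graph
        q = embed(query)
        scored = [(cosine(q, embed(e)) + 0.01 * i, e) for i, e in enumerate(pool)]
        return [e for _, e in sorted(scored, reverse=True)[:k]]
```

The key property the sketch mirrors is that noisy in-progress dialogue never touches the consolidated store: consolidation fires only when the conversation moves on, which is how the paper frames the separation of rapid context perception from stable knowledge retention.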