AI Navigate

AdaMem: Adaptive User-Centric Memory for Long-Horizon Dialogue Agents

arXiv cs.CL / March 18, 2026


Key Points

  • AdaMem introduces an adaptive, user-centric memory framework for long-horizon dialogue agents, organizing dialogue history into working, episodic, persona, and graph memories to preserve context, long-term user traits, and relation-aware connections.
  • At inference time, AdaMem resolves the target participant, builds a question-conditioned retrieval route that blends semantic retrieval with selective relation-aware graph expansion when needed, and uses a role-specialized pipeline for evidence synthesis and response generation.
  • The approach addresses key limitations of prior memory systems, including overreliance on semantic similarity, fragmentation of related experiences, and static memory granularity.
  • Evaluation on LoCoMo and PERSONAMEM benchmarks shows state-of-the-art performance in long-horizon reasoning and user modeling, with code to be released upon acceptance.
  • The work aims to improve consistency, personalization, and reasoning in long interactions for AI agents.
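The four-tier memory organization described above can be pictured as a simple container with one slot per tier. This is only an illustrative sketch, not AdaMem's actual implementation (the code has not been released); every class, field, and method name here is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class AdaMemStore:
    """Hypothetical container mirroring AdaMem's four memory tiers."""
    working: list = field(default_factory=list)    # recent dialogue turns (short-term context)
    episodic: list = field(default_factory=list)   # structured long-term experiences
    persona: dict = field(default_factory=dict)    # stable user traits (e.g. preferences)
    graph: dict = field(default_factory=dict)      # relation-aware edges between memory items

    def add_turn(self, speaker: str, text: str) -> None:
        """Append a dialogue turn to working memory."""
        self.working.append((speaker, text))

    def set_trait(self, key: str, value: str) -> None:
        """Record or update a stable trait in persona memory."""
        self.persona[key] = value
```

Separating the tiers this way lets retrieval policies treat recent context, long-term experiences, user traits, and relational structure differently, which is the premise of the paper's adaptive routing.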

Abstract

Large language model (LLM) agents increasingly rely on external memory to support long-horizon interaction, personalized assistance, and multi-step reasoning. However, existing memory systems still face three core challenges: they often rely too heavily on semantic similarity, which can miss evidence crucial for user-centric understanding; they frequently store related experiences as isolated fragments, weakening temporal and causal coherence; and they typically use static memory granularities that do not adapt well to the requirements of different questions. We propose AdaMem, an adaptive user-centric memory framework for long-horizon dialogue agents. AdaMem organizes dialogue history into working, episodic, persona, and graph memories, enabling the system to preserve recent context, structured long-term experiences, stable user traits, and relation-aware connections within a unified framework. At inference time, AdaMem first resolves the target participant, then builds a question-conditioned retrieval route that combines semantic retrieval with relation-aware graph expansion only when needed, and finally produces the answer through a role-specialized pipeline for evidence synthesis and response generation. We evaluate AdaMem on the LoCoMo and PERSONAMEM benchmarks for long-horizon reasoning and user modeling. Experimental results show that AdaMem achieves state-of-the-art performance on both benchmarks. The code will be released upon acceptance.
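The question-conditioned retrieval route in the abstract (semantic retrieval, with relation-aware graph expansion applied only when needed) can be sketched as a two-stage function. This is a minimal illustration under stated assumptions: token overlap stands in for real embedding similarity, a keyword check stands in for the paper's routing decision, and all names are hypothetical.

```python
def _overlap(a: str, b: str) -> int:
    """Crude stand-in for semantic similarity: shared-token count."""
    return len(set(a.lower().split()) & set(b.lower().split()))

def retrieve(question: str, memories: list[str],
             graph: dict[str, list[str]], k: int = 2) -> list[str]:
    # Stage 1: semantic retrieval -- rank stored memories by similarity.
    ranked = sorted(memories, key=lambda m: _overlap(question, m), reverse=True)
    hits = ranked[:k]
    # Stage 2: relation-aware graph expansion, triggered selectively
    # (here by a crude multi-hop cue in the question wording).
    if any(w in question.lower() for w in ("why", "because", "related")):
        for m in list(hits):
            for neighbor in graph.get(m, []):
                if neighbor not in hits:
                    hits.append(neighbor)
    return hits
```

Expanding only on relation-seeking questions keeps ordinary lookups cheap while still surfacing causally or temporally linked evidence that pure similarity search would miss, which is the failure mode the abstract highlights.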