EngramaBench: Evaluating Long-Term Conversational Memory with Structured Graph Retrieval

arXiv cs.CL · April 24, 2026


Key Points

  • The paper introduces EngramaBench, a new benchmark for long-term conversational memory across multiple sessions, built from five personas, one hundred multi-session conversations, and 150 queries spanning factual recall, cross-space integration, temporal reasoning, adversarial abstention, and emergent synthesis.
  • It compares Engrama, a graph-structured memory system, against GPT-4o full-context prompting and Mem0, an open-source vector-retrieval memory system, while keeping GPT-4o as the answering model throughout to isolate the impact of memory architecture.
  • GPT-4o full-context prompting achieves the best overall (composite) score, but Engrama is the only approach that outperforms full-context on cross-space reasoning.
  • Mem0 is the most cost-effective option, yet it performs substantially worse than the other two systems on the benchmark.
  • Ablation results suggest a trade-off: Engrama’s components that improve cross-space reasoning reduce its overall composite performance, highlighting a tension between specialized structured memory and aggregate optimization.

Abstract

Large language model assistants are increasingly expected to retain and reason over information accumulated across many sessions. We introduce EngramaBench, a benchmark for long-term conversational memory built around five personas, one hundred multi-session conversations, and one hundred fifty queries spanning factual recall, cross-space integration, temporal reasoning, adversarial abstention, and emergent synthesis. We evaluate Engrama, a graph-structured memory system, against GPT-4o full-context prompting and Mem0, an open-source vector-retrieval memory system. All three use the same answering model (GPT-4o), isolating the effect of memory architecture. GPT-4o full-context achieves the highest composite score (0.6186), while Engrama scores 0.5367 globally but is the only system to score higher than full-context prompting on cross-space reasoning (0.6532 vs. 0.6291, n=30). Mem0 is cheapest but substantially weaker (0.4809). Ablations reveal that the components driving Engrama's cross-space advantage trade off against global composite score, exposing a systems-level tension between structured memory specialization and aggregate optimization.
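The benchmark's structure (150 queries split evenly across five categories, so n=30 per category, as in the cross-space figure above) suggests a per-category scoring scheme rolled up into a composite. The sketch below shows one plausible way such a composite could be computed; the equal weighting across categories and all numeric values are assumptions for illustration, not the paper's actual data or scoring formula.

```python
from statistics import mean

# Hypothetical per-query scores in [0, 1], grouped by EngramaBench's five
# query categories (the real benchmark has 30 queries per category; three
# illustrative values each are shown here).
scores = {
    "factual_recall":         [0.7, 0.6, 0.8],
    "cross_space_integration":[0.65, 0.60, 0.70],
    "temporal_reasoning":     [0.5, 0.55, 0.6],
    "adversarial_abstention": [0.4, 0.5, 0.45],
    "emergent_synthesis":     [0.6, 0.65, 0.55],
}

def category_means(scores: dict[str, list[float]]) -> dict[str, float]:
    """Mean score within each query category."""
    return {cat: mean(vals) for cat, vals in scores.items()}

def composite(scores: dict[str, list[float]]) -> float:
    """Composite score as the unweighted mean of category means.

    Equal category weighting is an assumption; the paper may
    aggregate differently.
    """
    return mean(category_means(scores).values())
```

With the illustrative values above, the category means are 0.70, 0.65, 0.55, 0.45, and 0.60, giving a composite of 0.59. This kind of per-category breakdown is what makes the paper's headline trade-off visible: a system can lose on the composite while winning on a single category such as cross-space integration.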