EngramaBench: Evaluating Long-Term Conversational Memory with Structured Graph Retrieval
arXiv cs.CL · April 24, 2026
Key Points
- The paper introduces EngramaBench, a new benchmark to evaluate long-term conversational memory across multiple sessions using five personas and 150 queries covering recall, integration, temporal reasoning, adversarial abstention, and synthesis.
- It compares the graph-structured memory system Engrama against GPT-4o full-context prompting and Mem0, while keeping the answering model as GPT-4o to isolate the impact of memory architecture.
- GPT-4o full-context prompting achieves the best overall (composite) score, but Engrama is the only approach that outperforms full-context on cross-space reasoning.
- Mem0 is the most cost-effective option, yet it performs substantially worse than the other two systems on the benchmark.
- Ablation results suggest a trade-off: the Engrama components that improve cross-space reasoning reduce its overall composite score, highlighting a tension between specialized structured memory and optimizing for aggregate performance.
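The benchmark described above scores systems per query category and aggregates into a composite. The exact schema and weighting are not given here, so the sketch below is a hypothetical, minimal scoring harness: it assumes five category names matching the paper's task list, exact-match grading, and an unweighted macro average as the composite.

```python
from collections import defaultdict

# Hypothetical query records; the real EngramaBench schema is an assumption here.
# "gold" is None for adversarial-abstention queries, where the correct
# behavior is to decline to answer (prediction None).
QUERIES = [
    {"id": 1, "category": "recall", "gold": "Paris", "prediction": "Paris"},
    {"id": 2, "category": "temporal_reasoning", "gold": "2024", "prediction": "2023"},
    {"id": 3, "category": "adversarial_abstention", "gold": None, "prediction": None},
    {"id": 4, "category": "integration", "gold": "moved to Lyon", "prediction": "moved to Lyon"},
    {"id": 5, "category": "synthesis", "gold": "prefers tea", "prediction": "prefers tea"},
]

def score(queries):
    """Return per-category accuracy and an unweighted macro composite."""
    correct, total = defaultdict(int), defaultdict(int)
    for q in queries:
        total[q["category"]] += 1
        # Exact-match grading; abstention counts as correct when both are None.
        if q["prediction"] == q["gold"]:
            correct[q["category"]] += 1
    per_cat = {c: correct[c] / total[c] for c in total}
    composite = sum(per_cat.values()) / len(per_cat)
    return per_cat, composite

per_cat, composite = score(QUERIES)
print(per_cat)       # accuracy per category
print(composite)     # macro average across categories
```

A macro composite like this would explain the reported tension: a system can win a single category (e.g. cross-space reasoning) while losing on the average across all five.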