MemORAI: Memory Organization and Retrieval via Adaptive Graph Intelligence for LLM Conversational Agents

arXiv cs.CL / 5/5/2026


Key Points

  • The paper argues that current LLM conversational systems lack persistent, long-term personalized memory, leading to weaker continuity across turns.
  • It proposes MemORAI, which improves graph-based memory by selectively filtering what to store and using dual-layer compression to keep persona-relevant content.
  • MemORAI adds provenance tracking via a provenance-enriched multi-relational graph that records factual origins at the turn level.
  • For retrieval, it uses query-adaptive subgraph selection with Dynamic Weighted PageRank, weighting graph edges based on the current query context.
  • Experiments on LOCOMO and LongMemEval show state-of-the-art results for memory retrieval and personalized response generation, highlighting selective storage, enriched representation, and adaptive retrieval as key to coherent agents.
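To make the retrieval bullet concrete, here is a minimal sketch of weighted PageRank with query-conditioned edge reweighting. The paper's exact weighting function is not given in this summary, so the `relevance` multipliers (e.g., derived from query-edge embedding similarity) and the graph encoding are assumptions for illustration, not MemORAI's actual implementation.

```python
def weighted_pagerank(edges, relevance, damping=0.85, iters=50):
    """Query-adaptive weighted PageRank over a memory graph (sketch).

    edges: dict mapping node -> list of (neighbor, base_weight).
    relevance: dict mapping (node, neighbor) -> query-conditioned
        multiplier; edges absent from the dict keep weight 1.0.
    """
    nodes = set(edges)
    for nbrs in edges.values():
        nodes.update(n for n, _ in nbrs)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for u, nbrs in edges.items():
            # Reweight each outgoing edge by its query-conditioned multiplier.
            w = [(v, bw * relevance.get((u, v), 1.0)) for v, bw in nbrs]
            total = sum(x for _, x in w)
            if total == 0:
                continue
            for v, x in w:
                new[v] += damping * rank[u] * (x / total)
        rank = new
    return rank


# Toy graph: node "a" links to memories "b" and "c" with equal base weight.
edges = {"a": [("b", 1.0), ("c", 1.0)],
         "b": [("a", 1.0)],
         "c": [("a", 1.0)]}
# Suppose the current query makes the a->b edge more relevant.
ranks = weighted_pagerank(edges, {("a", "b"): 3.0})
```

With uniform relevance, `b` and `c` would score identically; boosting the `("a", "b")` edge shifts rank mass toward `b`, which is the intuition behind conditioning edge weights on the query context.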

Abstract

Large Language Models (LLMs) lack persistent memory for long-term personalized conversations. Existing graph-based memory systems suffer from information dilution, absent provenance tracking, and uniform retrieval that ignores query context. We introduce MemORAI (Memory Organization and Retrieval via Adaptive Graph Intelligence), a framework that integrates three innovations: selective memory filtering with dual-layer compression to retain user-persona-relevant content, a provenance-enriched multi-relational graph tracking factual origins at the turn level, and query-adaptive subgraph retrieval with Dynamic Weighted PageRank that applies query-conditioned edge weighting. Evaluated on LOCOMO and LongMemEval benchmarks, MemORAI achieves state-of-the-art performance in memory retrieval and personalized response generation, demonstrating that selective storage, enriched representation, and adaptive retrieval are essential for coherent, personalized LLM agents.