StructMem: Structured Memory for Long-Horizon Behavior in LLMs

arXiv cs.CL / 4/24/2026


Key Points

  • The paper introduces StructMem, a structure-enriched hierarchical memory framework designed for long-horizon conversational agents to capture relationships between events rather than isolated facts.
  • It tackles an existing trade-off where flat memories are efficient but miss relational structure, while graph-based memories support structured reasoning but are expensive and brittle to build.
  • StructMem preserves event-level bindings, adds cross-event connections, temporally anchors dual perspectives, and performs periodic semantic consolidation to improve temporal reasoning.
  • Experiments on LoCoMo show gains in temporal reasoning and multi-hop question answering, alongside substantial reductions in token usage, API calls, and runtime versus prior memory systems.
  • The work includes an open-source repository at https://github.com/zjunlp/LightMem, making the system available for hands-on testing and further development.
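To make the memory design above concrete, here is a minimal sketch of a structure-enriched memory: events keep their facts bound together, carry a temporal anchor, and hold cross-event links that a multi-hop query can traverse. All class and method names here are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """One conversational event; its facts stay bound together (event-level binding)."""
    eid: int
    timestamp: str                                 # temporal anchor
    facts: list[str]
    links: set[int] = field(default_factory=set)   # cross-event connections

class StructuredMemory:
    """Toy structured memory: event nodes, cross-event links, consolidation."""

    def __init__(self) -> None:
        self.events: dict[int, Event] = {}

    def add_event(self, eid: int, timestamp: str, facts: list[str]) -> None:
        self.events[eid] = Event(eid, timestamp, facts)

    def link(self, a: int, b: int) -> None:
        # Induce a cross-event connection (e.g., a shared entity or causal tie).
        self.events[a].links.add(b)
        self.events[b].links.add(a)

    def consolidate(self) -> None:
        # Periodic semantic consolidation (sketched here as simple
        # order-preserving deduplication of each event's facts).
        for e in self.events.values():
            e.facts = list(dict.fromkeys(e.facts))

    def multi_hop(self, start: int, hops: int) -> list[str]:
        """Collect facts reachable within `hops` link traversals (multi-hop QA)."""
        frontier, seen = {start}, {start}
        for _ in range(hops):
            frontier = {n for e in frontier for n in self.events[e].links} - seen
            seen |= frontier
        return [f for eid in sorted(seen) for f in self.events[eid].facts]
```

A flat memory would store the two facts below as unrelated strings; the cross-event link is what lets a single one-hop query recover both:

```python
mem = StructuredMemory()
mem.add_event(1, "2026-01-01", ["Alice adopted a dog"])
mem.add_event(2, "2026-02-01", ["The dog is named Rex"])
mem.link(1, 2)
mem.multi_hop(1, 1)  # facts from both linked events
```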

Abstract

Long-term conversational agents need memory systems that capture relationships between events, not merely isolated facts, to support temporal reasoning and multi-hop question answering. Current approaches face a fundamental trade-off: flat memory is efficient but fails to model relational structure, while graph-based memory enables structured reasoning at the cost of expensive and fragile construction. To address these issues, we propose **StructMem**, a structure-enriched hierarchical memory framework that preserves event-level bindings and induces cross-event connections. By temporally anchoring dual perspectives and performing periodic semantic consolidation, StructMem improves temporal reasoning and multi-hop performance on `LoCoMo`, while substantially reducing token usage, API calls, and runtime compared to prior memory systems. Code is available at https://github.com/zjunlp/LightMem.