StratMem-Bench: Evaluating Strategic Memory Use in Virtual Character Conversation Beyond Factual Recall

arXiv cs.AI / 4/30/2026


Key Points

  • The paper argues that realistic virtual character conversations require strategic memory use, not just factual memorization and recall.
  • It introduces StratMem-Bench, a new benchmark (657 instances) in which virtual characters must select from heterogeneous memory pools containing required, supportive, and irrelevant memories (see the illustrative sketch after this list).
  • The authors propose evaluation metrics (e.g., Strict Memory Compliance, Memory Integration Quality, Proactive Enrichment Score, and Conditional Irrelevance Rate) to measure how well characters deploy memory dynamically.
  • Experiments with state-of-the-art large language models show strong performance in separating required vs. irrelevant memories, but notable difficulty when supportive memories are involved.
  • Overall, the benchmark targets a gap in existing memory-related evaluations that treat memory mainly as a static fact store rather than a decision-making resource.
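
The article does not reproduce the benchmark's data schema, so the following is only an assumed sketch of how an instance with a heterogeneous memory pool might be represented; the field and label names here are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    # Hypothetical label mirroring the three memory types named in the paper:
    # "required", "supportive", or "irrelevant" to the current dialogue turn.
    relevance: str

@dataclass
class BenchmarkInstance:
    character_profile: str        # persona description for the virtual character
    dialogue_context: list[str]   # preceding conversation turns
    memory_pool: list[Memory] = field(default_factory=list)

    def memories_of(self, relevance: str) -> list[Memory]:
        """Filter the heterogeneous pool by relevance label."""
        return [m for m in self.memory_pool if m.relevance == relevance]
```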

Abstract

Achieving realistic, human-like conversation for virtual characters requires not only simple memorization and recall of past events, but also the strategic utilization of memory to serve both factual needs and social engagement. Current memory-related benchmarks (e.g., for memory-augmented generation and long-term dialogue) overlook this nuance, treating memory primarily as a static repository of facts rather than a dynamic resource to be strategically deployed in dialogue. To address this gap, we design StratMem-Bench, a new benchmark for evaluating strategic memory use in character-centric dialogues. The dataset comprises 657 instances in which virtual characters must navigate heterogeneous memory pools containing required, supportive, and irrelevant memories. We also propose an evaluation framework with metrics including Strict Memory Compliance, Memory Integration Quality, Proactive Enrichment Score, and Conditional Irrelevance Rate to assess the strategic memory use capabilities of virtual characters. Experiments on StratMem-Bench, using state-of-the-art large language models as virtual characters, show that all models perform well at distinguishing required from irrelevant memories but struggle once supportive memories are introduced into the decision process.
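
The paper's exact metric formulas are not given in this summary. As a loose, assumed analogue of a compliance-style score (not the paper's actual Strict Memory Compliance definition), one could measure the fraction of responses that draw on every required memory while avoiding all irrelevant ones, assuming memory usage can be identified per response:

```python
def strict_compliance_rate(responses: list[dict]) -> float:
    """Illustrative compliance-style score (hypothetical, not the paper's formula).

    Each response dict is assumed to carry the ids of memories it actually
    used ("used"), plus the gold "required" and "irrelevant" memory ids.
    A response counts as compliant only if it uses every required memory
    and none of the irrelevant ones.
    """
    if not responses:
        return 0.0
    compliant = sum(
        1 for r in responses
        if set(r["required"]) <= set(r["used"])
        and not set(r["irrelevant"]) & set(r["used"])
    )
    return compliant / len(responses)
```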