Memory-Driven Role-Playing: Evaluation and Enhancement of Persona Knowledge Utilization in LLMs
arXiv cs.AI / 3/23/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper identifies the challenge of maintaining faithful, consistent persona characterization in long, open-ended dialogues and proposes Memory-Driven Role-Playing (MRP), which treats persona knowledge as an internal memory store that the model retrieves from as the dialogue unfolds (see the sketch after these points).
- It introduces MREval, MRPrompt, and MRBench (a bilingual Chinese/English benchmark) to diagnose and enhance four memory-driven abilities: Anchoring, Recalling, Bounding, and Enacting.
- Experimental results show that MRPrompt enables small models (e.g., Qwen3-8B) to match the performance of larger closed-source LLMs (e.g., Qwen3-Max, GLM-4.7), demonstrating that memory-focused prompting can boost efficiency.
- The work highlights that upstream memory gains translate into downstream response-quality improvements, and it evaluates 12 LLMs with its diagnostic suite.
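
The paper's actual MRPrompt template and retrieval mechanism are not reproduced here; as a rough illustration of the memory-driven idea, the Python sketch below treats persona facts as a small memory store, retrieves the entries most relevant to the current dialogue turn with a simple keyword-overlap score (a stand-in for whatever retrieval the authors use), and assembles them into a role-play prompt. The names `PersonaMemory`, `retrieve`, and `build_prompt`, as well as the scoring heuristic, are assumptions for illustration, not the paper's implementation.

```python
# Illustrative sketch only: the memory store, keyword-overlap retrieval,
# and prompt template are assumptions, not the paper's MRPrompt.

from dataclasses import dataclass, field


@dataclass
class PersonaMemory:
    """Persona knowledge kept as an internal memory store of atomic facts."""
    character: str
    facts: list[str] = field(default_factory=list)

    def retrieve(self, dialogue_context: str, k: int = 3) -> list[str]:
        """Return the k facts sharing the most words with the dialogue context.

        A real system would use embeddings or a learned retriever; word
        overlap is just a minimal stand-in for scoring relevance.
        """
        context_words = set(dialogue_context.lower().split())
        scored = sorted(
            self.facts,
            key=lambda f: len(context_words & set(f.lower().split())),
            reverse=True,
        )
        return scored[:k]


def build_prompt(memory: PersonaMemory, dialogue_context: str) -> str:
    """Compose a role-play prompt that anchors the model to recalled facts
    and bounds it against knowledge the persona could not have (loosely
    mirroring the paper's Anchoring/Recalling/Bounding/Enacting framing)."""
    recalled = "\n".join(f"- {fact}" for fact in memory.retrieve(dialogue_context))
    return (
        f"You are role-playing {memory.character}.\n"
        f"Recalled persona memories relevant to this turn:\n{recalled}\n"
        "Stay within these memories; if asked about something your persona "
        "could not know, say so in character rather than inventing facts.\n\n"
        f"Dialogue so far:\n{dialogue_context}\n{memory.character}:"
    )


if __name__ == "__main__":
    mem = PersonaMemory(
        character="Ada",
        facts=[
            "Ada is a 19th-century mathematician.",
            "Ada collaborated with Charles Babbage on the Analytical Engine.",
            "Ada has never seen an electronic computer.",
        ],
    )
    print(build_prompt(mem, "User: What do you think of the Analytical Engine?"))
```

Keeping retrieval separate from prompt assembly, as above, is what lets upstream memory quality be measured independently of downstream response quality, which is the kind of decoupled diagnosis the paper's evaluation suite is built around.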
Related Articles
Is AI becoming a bubble, and could it end like the dot-com crash?
Reddit r/artificial

Externalizing State
Dev.to

I made a 'benchmark' where LLMs write code controlling units in a 1v1 RTS game.
Dev.to

My AI Does Not Have a Clock
Dev.to

How to settle on a coding LLM? What parameters to watch out for?
Reddit r/LocalLLaMA