Drawing on Memory: Dual-Trace Encoding Improves Cross-Session Recall in LLM Agents
arXiv cs.AI / 4/15/2026
Key Points
- The paper argues that persistent-memory LLM agents often store information as flat facts, limiting temporal reasoning, change tracking, and cross-session aggregation.
- It proposes “dual-trace encoding,” where each stored fact is paired with a concrete scene trace (a narrative reconstruction of when and under what context the information was learned) to make memories more distinctive.
- Experiments on the LongMemEval-S benchmark (4,575 sessions, 100 recall questions) show dual-trace outperforms a fact-only control, achieving 73.7% vs 53.5% overall accuracy (+20.2 pp, statistically significant).
- The improvement is concentrated in temporal reasoning (+40 pp), knowledge-update tracking (+25 pp), and multi-session aggregation (+30 pp), with no gain for single-session retrieval, aligning with encoding specificity theory.
- Token-level analysis indicates the accuracy gains come at no additional token cost, and the authors outline how the method could be adapted to coding agents, reporting preliminary pilot results.
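The core idea of dual-trace encoding can be sketched as a memory record that pairs each flat fact with a reconstructed scene trace. This is an illustrative sketch only: the class and function names (`DualTraceMemory`, `encode_memory`) and the trace template are assumptions, not the paper's implementation.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class DualTraceMemory:
    """Hypothetical dual-trace record: a fact plus its encoding context."""
    fact: str           # flat semantic fact, as a fact-only store would keep it
    scene_trace: str    # narrative reconstruction of when/how it was learned
    session_id: str     # session that produced the memory
    timestamp: datetime # encoding time, needed for temporal reasoning

def encode_memory(fact: str, session_id: str, context: str,
                  timestamp: datetime) -> DualTraceMemory:
    """Pair a fact with a concrete scene trace before storing it.

    The trace ties the fact to a specific episode, making otherwise
    similar memories distinctive at retrieval time (illustrative template).
    """
    scene = (f"During session {session_id} on {timestamp:%Y-%m-%d}, "
             f"while {context}, the agent learned: {fact}")
    return DualTraceMemory(fact=fact, scene_trace=scene,
                           session_id=session_id, timestamp=timestamp)

# Usage: two similar facts learned in different sessions now carry
# distinguishing episodic context, which encoding specificity predicts
# should help temporal reasoning and knowledge-update tracking.
m = encode_memory("the user switched jobs to Acme Corp",
                  session_id="s42",
                  context="discussing a commute change",
                  timestamp=datetime(2026, 3, 1))
```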