AI Navigate

MEMO: Memory-Augmented Model Context Optimization for Robust Multi-Turn Multi-Agent LLM Games

arXiv cs.AI / March 11, 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • Multi-turn, multi-agent large language model (LLM) game evaluations exhibit substantial run-to-run variance: small early deviations compound across turns and are amplified by agent coupling, while prompt choice induces different effective policies, making win-rate estimates and rankings unreliable.
  • The MEMO framework (Memory-augmented MOdel context optimization) is a self-play approach that optimizes inference-time context by coupling retention, via a persistent memory bank of structured insights, with exploration, via tournament-style prompt evolution under uncertainty-aware TrueSkill selection (see the sketch after this list).
  • Across five text-based games, MEMO raises the mean win rate from 25.1% to 49.5% for GPT-4o-mini and from 20.9% to 44.3% for Qwen-2.5-7B-Instruct, while reducing run-to-run variance and stabilizing rankings across prompt variations.
  • The approach is particularly effective in negotiation and imperfect-information games, suggesting substantial potential for enhancing multi-agent LLM robustness and performance via context optimization.
  • MEMO's results highlight that reinforcement learning still performs better in perfect-information games, indicating complementary strengths between methods.
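The exploration half of MEMO scores competing prompts with TrueSkill, a Bayesian rating system that tracks both a skill estimate (mu) and its uncertainty (sigma). Below is a minimal sketch of one plausible uncertainty-aware selection rule, using the open-source Python trueskill package; it is not the authors' implementation, and play_match is a random stand-in for an actual multi-turn LLM game.

    # Minimal sketch of uncertainty-aware prompt selection with TrueSkill,
    # in the spirit of MEMO's exploration phase (not the authors' code).
    # Requires the open-source `trueskill` package: pip install trueskill.
    import random
    import trueskill

    prompts = ["prompt-A", "prompt-B", "prompt-C", "prompt-D"]
    ratings = {p: trueskill.Rating() for p in prompts}  # mu=25, sigma~8.33

    def play_match(p1, p2):
        """Stand-in for a multi-turn LLM game; returns True if p1 wins."""
        return random.random() < 0.5

    def optimistic_score(rating, k=2.0):
        # Exploration bonus: poorly characterized prompts score high
        # (large sigma), so they keep getting matched until measured.
        return rating.mu + k * rating.sigma

    for _ in range(50):
        # Pair the two most promising prompts under the optimistic score.
        a, b = sorted(prompts, key=lambda p: optimistic_score(ratings[p]),
                      reverse=True)[:2]
        if play_match(a, b):
            ratings[a], ratings[b] = trueskill.rate_1vs1(ratings[a], ratings[b])
        else:
            ratings[b], ratings[a] = trueskill.rate_1vs1(ratings[b], ratings[a])

    # Rank survivors conservatively: penalize remaining uncertainty.
    for p in sorted(prompts, key=lambda p: ratings[p].mu - 3 * ratings[p].sigma,
                    reverse=True):
        print(p, round(ratings[p].mu, 1), round(ratings[p].sigma, 1))

Selecting matches by mu + k*sigma keeps uncertain prompts in rotation, while the final conservative ranking (mu - 3*sigma) is the standard way TrueSkill leaderboards are reported.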


arXiv:2603.09022 (cs)
[Submitted on 9 Mar 2026]

Title: MEMO: Memory-Augmented Model Context Optimization for Robust Multi-Turn Multi-Agent LLM Games

Abstract: Multi-turn, multi-agent LLM game evaluations often exhibit substantial run-to-run variance. In long-horizon interactions, small early deviations compound across turns and are amplified by multi-agent coupling. This biases win rate estimates and makes rankings unreliable across repeated tournaments. Prompt choice worsens this further by producing different effective policies. We address both instability and underperformance with MEMO (Memory-augmented MOdel context optimization), a self-play framework that optimizes inference-time context by coupling retention and exploration. Retention maintains a persistent memory bank that stores structured insights from self-play trajectories and injects them as priors during later play. Exploration runs tournament-style prompt evolution with uncertainty-aware selection via TrueSkill, and uses prioritized replay to revisit rare and decisive states. Across five text-based games, MEMO raises mean win rate from 25.1% to 49.5% for GPT-4o-mini and from 20.9% to 44.3% for Qwen-2.5-7B-Instruct, using 2,000 self-play games per task. Run-to-run variance also drops, giving more stable rankings across prompt variations. These results suggest that multi-agent LLM game performance and robustness have substantial room for improvement through context optimization. MEMO achieves the largest gains in negotiation and imperfect-information games, while RL remains more effective in perfect-information settings.
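The abstract's "prioritized replay to revisit rare and decisive states" can be illustrated with proportional prioritized sampling, in the style of prioritized experience replay from RL. The buffer below is a generic sketch under that assumption; the paper's actual priority definition and data structures are not specified here, so the priority values are placeholders.

    # Generic sketch of proportional prioritized replay, illustrating how
    # rare and decisive states could be revisited during self-play. The
    # priority values are placeholders, not the paper's actual scheme.
    import random

    class PrioritizedReplay:
        def __init__(self, capacity=1000, alpha=0.6):
            self.capacity = capacity
            self.alpha = alpha      # how strongly priority skews sampling
            self.items = []         # list of (priority, state) pairs

        def add(self, state, priority):
            if len(self.items) >= self.capacity:
                # Evict the lowest-priority entry to make room.
                self.items.remove(min(self.items, key=lambda it: it[0]))
            self.items.append((priority, state))

        def sample(self, k=4):
            # Sample in proportion to priority**alpha (with replacement).
            weights = [p ** self.alpha for p, _ in self.items]
            return [s for _, s in random.choices(self.items, weights=weights, k=k)]

    # Decisive states (e.g. a large swing in estimated win probability)
    # get high priority, so later self-play revisits them more often.
    buf = PrioritizedReplay()
    buf.add({"turn": 3, "note": "routine opening move"}, priority=0.1)
    buf.add({"turn": 9, "note": "bluff called, game swung"}, priority=0.9)
    for state in buf.sample(k=3):
        print(state["note"])

Insights extracted from the replayed trajectories would then feed the persistent memory bank described above, closing MEMO's retention-exploration loop.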
Subjects: Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.09022 [cs.AI]
  (or arXiv:2603.09022v1 [cs.AI] for this version)
  https://doi.org/10.48550/arXiv.2603.09022

Submission history

From: Kevin Wang
[v1] Mon, 9 Mar 2026 23:36:32 UTC (2,130 KB)