AI Navigate

Strategically Robust Multi-Agent Reinforcement Learning with Linear Function Approximation

arXiv cs.LG · March 11, 2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper addresses the challenge of computing robust equilibria in general-sum Markov games with multi-agent reinforcement learning, motivated by two limitations of Nash equilibrium: it is computationally intractable in general, and it is brittle under equilibrium multiplicity and approximation error.
  • It introduces the Risk-Sensitive Quantal Response Equilibrium (RQRE) and proposes RQRE-OVI, an optimistic value iteration algorithm with linear function approximation for scalable equilibrium computation in large or continuous state spaces; a minimal sketch of the RQRE policy map follows this list.
  • The finite-sample regret analysis exposes a quantitative tradeoff: increasing rationality tightens the regret bounds, while risk sensitivity acts as a regularizer that improves stability and robustness, placing RQRE on a Pareto frontier between expected performance and robustness.
  • The RQRE policy map is Lipschitz continuous in estimated payoffs, unlike the Nash best-response map, and admits a distributionally robust optimization interpretation, indicating improved stability against payoff estimation errors.
  • Empirically, RQRE-OVI performs competitively in self-play and behaves substantially more robustly in cross-play than Nash-based methods, suggesting a principled, tunable approach to robust equilibrium learning.


arXiv:2603.09208 (cs)
[Submitted on 10 Mar 2026]

Title: Strategically Robust Multi-Agent Reinforcement Learning with Linear Function Approximation

Authors: Jake Gonzales and 3 other authors
Abstract: Provably efficient and robust equilibrium computation in general-sum Markov games remains a core challenge in multi-agent reinforcement learning. Nash equilibrium is computationally intractable in general and brittle due to equilibrium multiplicity and sensitivity to approximation error. We study Risk-Sensitive Quantal Response Equilibrium (RQRE), which yields a unique, smooth solution under bounded rationality and risk sensitivity. We propose RQRE-OVI, an optimistic value iteration algorithm for computing RQRE with linear function approximation in large or continuous state spaces. Through finite-sample regret analysis, we establish convergence and explicitly characterize how sample complexity scales with rationality and risk-sensitivity parameters. The regret bounds reveal a quantitative tradeoff: increasing rationality tightens regret, while risk sensitivity induces regularization that enhances stability and robustness. This exposes a Pareto frontier between expected performance and robustness, with Nash recovered in the limit of perfect rationality and risk neutrality. We further show that the RQRE policy map is Lipschitz continuous in estimated payoffs, unlike Nash, and RQRE admits a distributionally robust optimization interpretation. Empirically, we demonstrate that RQRE-OVI achieves competitive performance under self-play while producing substantially more robust behavior under cross-play compared to Nash-based approaches. These results suggest RQRE-OVI offers a principled, scalable, and tunable path for equilibrium learning with improved robustness and generalization.
Subjects: Machine Learning (cs.LG); Computer Science and Game Theory (cs.GT); Multiagent Systems (cs.MA)
Cite as: arXiv:2603.09208 [cs.LG]
  (or arXiv:2603.09208v1 [cs.LG] for this version)
  https://doi.org/10.48550/arXiv.2603.09208

Submission history

From: Jake Gonzales [view email]
[v1] Tue, 10 Mar 2026 05:24:10 UTC (3,568 KB)