The Harder Path: Last Iterate Convergence for Uncoupled Learning in Zero-Sum Games with Bandit Feedback

arXiv cs.LG / April 20, 2026


Key Points

  • The paper studies learning dynamics in zero-sum matrix games under repeated play with bandit (partial) feedback, aiming for “uncoupled” algorithms that require no communication between players.
  • It proves a negative result: guaranteeing that the players' policy profile itself converges to a Nash equilibrium (last-iterate convergence) is strictly harder than guaranteeing convergence of the average iterates.
  • The authors establish an Ω(T^{-1/4}) lower bound on the last-iterate exploitability gap of uncoupled methods, slower than the usual T^{-1/2} rate attainable for average-iterate convergence (the exploitability gap is sketched in code after this list).
  • They introduce two uncoupled algorithms, one based on an exploration/exploitation trade-off and another using a two-step mirror descent regularization, that match this Ω(T^{-1/4}) lower bound up to constant and logarithmic factors.
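
The rates above measure the exploitability gap: how much either player could gain by deviating from the current strategy profile. Here is a minimal sketch, assuming a payoff matrix A where the row player maximizes xᵀAy and the column player minimizes it (this notation is ours, not taken from the paper):

```python
import numpy as np

def exploitability(A, x, y):
    """Exploitability gap of the strategy profile (x, y).

    The row player maximizes x^T A y, the column player minimizes it;
    the gap is nonnegative and is zero exactly at a Nash equilibrium.
    """
    best_row = np.max(A @ y)   # row player's best-response value against y
    best_col = np.min(x @ A)   # column player's best-response value against x
    return float(best_row - best_col)
```

For matching pennies, A = [[1, -1], [-1, 1]], the uniform profile has a gap of 0 while any pure profile has a gap of 2. Last-iterate convergence requires this gap, evaluated at the strategies actually played at time T, to vanish; average-iterate convergence only requires it of the time-averaged strategies.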

Abstract

We study the problem of learning in zero-sum matrix games with repeated play and bandit feedback. Specifically, we focus on developing uncoupled algorithms that guarantee, without communication between players, the convergence of the last iterate to a Nash equilibrium. Although the non-bandit case has been studied extensively, this setting has only been explored recently, with a bound of O(T^{-1/8}) on the exploitability gap. We show that, for uncoupled algorithms, guaranteeing convergence of the policy profiles to a Nash equilibrium is detrimental to performance, with the best attainable rate being Ω(T^{-1/4}), in contrast to the usual Ω(T^{-1/2}) rate for convergence of the average iterates. We then propose two algorithms that achieve this optimal rate up to constant and logarithmic factors. The first algorithm leverages a straightforward trade-off between exploration and exploitation, while the second employs a regularization technique based on a two-step mirror descent approach.
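
To make the setting concrete, the sketch below shows a generic uncoupled bandit loop in the style of EXP3. It is not the paper's algorithm, and the step size, exploration rate, and function names are illustrative assumptions. Each player samples an action from its own distribution, observes only the realized scalar payoff, and updates an importance-weighted estimate, with no communication. Plain no-regret dynamics of this kind guarantee convergence of the average iterates, while the strategies actually played can keep cycling; attaining last-iterate convergence at the optimal rate is what the paper's two algorithms accomplish.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()                      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def uncoupled_bandit_play(A, T=10_000, eta=0.05, gamma=0.05):
    """Two EXP3-style learners in a zero-sum matrix game.

    A: payoff matrix with entries in [-1, 1]; the row player maximizes
    A[i, j], the column player minimizes it. Each player sees only its
    own realized payoff (bandit feedback) and never the opponent's
    strategy, so the loop is fully uncoupled.
    """
    n, m = A.shape
    logw_row, logw_col = np.zeros(n), np.zeros(m)
    for _ in range(T):
        # mix in uniform exploration so every action keeps positive probability
        p = (1 - gamma) * softmax(logw_row) + gamma / n
        q = (1 - gamma) * softmax(logw_col) + gamma / m
        i, j = rng.choice(n, p=p), rng.choice(m, p=q)
        g = A[i, j]                      # the single scalar each player observes
        # importance-weighted payoff estimates for the chosen actions only
        logw_row[i] += eta * g / p[i]    # row player ascends its payoff
        logw_col[j] += eta * -g / q[j]   # column player ascends the negated payoff
    # the last iterate (x_T, y_T), before exploration mixing
    return softmax(logw_row), softmax(logw_col)
```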