Last-Iterate Guarantees for Learning in Co-coercive Games

arXiv stat.ML / 4/22/2026


Key Points

  • The paper proves finite-time “last-iterate” performance guarantees for vanilla stochastic gradient descent (SGD) in co-coercive games with noisy feedback.
  • It extends the game class beyond strongly monotone games by covering scenarios with multiple Nash equilibria, including certain quadratic games and potential games.
  • Unlike prior work, which assumes a relative noise model in which the noise diminishes near equilibrium, it adopts a more realistic model where the second moment of the noise may grow affinely with the squared norm of the iterates.
  • Under this general non-vanishing noise setting, the authors derive a last-iterate bound of O(log(t)/t^{1/3}) and show almost sure convergence of iterates to the Nash equilibrium set, along with time-average convergence results.
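The dynamic described above can be sketched in a few lines. This is a minimal illustrative instance, not the paper's construction: the game matrix `M`, the noise scale `sigma`, and the `t^{-2/3}` step-size schedule are our own choices, picked only so that a co-coercive operator and affine-growth noise are both present.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical quadratic game: stacked payoff-gradient operator v(x) = -M x
# with M positive semidefinite, which makes v co-coercive. The unique
# equilibrium of this toy instance is x* = 0.
M = np.array([[1.0, 0.5],
              [0.5, 1.0]])

def noisy_gradient(x, sigma=0.1):
    # Affine noise model: E[||noise||^2] scales like c0 + c1 * ||x||^2,
    # realized here by multiplying Gaussian noise by sqrt(1 + ||x||^2).
    # Note the noise does NOT vanish at x = 0 (non-vanishing noise).
    noise = sigma * np.sqrt(1.0 + x @ x) * rng.standard_normal(x.shape)
    return -M @ x + noise

x = np.array([5.0, -3.0])
for t in range(1, 20001):
    eta = 0.5 / t ** (2.0 / 3.0)  # diminishing step size (illustrative choice)
    x = x + eta * noisy_gradient(x)  # vanilla SGD (gradient ascent on payoffs)

# The last iterate ends up close to the equilibrium x* = 0 despite the
# noise floor at equilibrium.
print(np.linalg.norm(x))
```

The key contrast with relative-noise analyses is visible in `noisy_gradient`: even at the equilibrium the noise has a strictly positive second moment, which is the regime the paper's O(log(t)/t^{1/3}) last-iterate bound addresses.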

Abstract

We establish finite-time last-iterate guarantees for vanilla stochastic gradient descent in co-coercive games under noisy feedback. This is a broad class of games that is more general than strongly monotone games, allows for multiple Nash equilibria, and includes examples such as quadratic games with negative semidefinite interaction matrices and potential games with smooth concave potentials. Prior work in this setting has relied on relative noise models, where the noise vanishes as iterates approach equilibrium, an assumption that is often unrealistic in practice. We work instead under a substantially more general noise model in which the second moment of the noise is allowed to scale affinely with the squared norm of the iterates, an assumption natural in learning with unbounded action spaces. Under this model, we prove a last-iterate bound of order O(\log(t)/t^{1/3}), the first such bound for co-coercive games under non-vanishing noise. We additionally establish almost sure convergence of the iterates to the set of Nash equilibria and derive time-average convergence guarantees.
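In symbols, the two standing assumptions can be sketched as follows. The notation and constants here are ours and the paper's exact definitions may differ; the sign convention takes payoff gradients for ascent, so the monotonicity inequality is flipped relative to the usual operator-theoretic statement.

```latex
% Co-coercivity of the stacked payoff-gradient operator
% v(x) = (\nabla_{x_1} u_1(x), \ldots, \nabla_{x_N} u_N(x)):
% for some \lambda > 0 and all x, y,
\langle v(x) - v(y),\, x - y \rangle \le -\lambda \,\lVert v(x) - v(y) \rVert^2 .

% Affine (non-vanishing) noise model: the observed feedback is
% v(x_t) + \xi_t with
\mathbb{E}\!\left[ \lVert \xi_t \rVert^2 \,\middle|\, \mathcal{F}_t \right]
  \le c_0 + c_1 \lVert x_t \rVert^2 ,
% where c_0 > 0 is allowed, so the noise need not vanish at equilibrium.
```

Strongly monotone games satisfy the first inequality with a unique equilibrium; co-coercivity relaxes this so that the equilibrium set may contain multiple points, which is why the paper's guarantees are stated with respect to the set of Nash equilibria.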