Finite-Time Analysis of Q-Value Iteration for General-Sum Stackelberg Games

arXiv cs.LG / 4/7/2026

Key Points

  • The paper provides a finite-time convergence analysis of Stackelberg Q-value iteration in two-player general-sum Markov games, closing a gap in multi-agent RL theory, where such guarantees have largely been confined to single-agent settings (a minimal sketch of the iteration follows this list).
  • It introduces a relaxed policy condition specific to the Stackelberg interaction structure and formulates the learning process as a switching system.
  • Using upper and lower comparison systems, the authors derive finite-time error bounds for the learned Q-functions and describe their convergence behavior.
  • The work reframes Stackelberg learning through a control-theoretic lens and claims to be the first to offer finite-time convergence guarantees for Q-value iteration in general-sum Markov games under Stackelberg interactions.
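
To make the object of study concrete, here is a minimal NumPy sketch of tabular Stackelberg Q-value iteration: in each state the follower best-responds to every leader action, the leader commits to the action that maximizes its own value given that response, and both players' Q-functions are backed up against the induced play. The function name `stackelberg_q_iteration`, the synchronous backup, and argmax tie-breaking for the follower are illustrative assumptions, not the paper's exact algorithm; its relaxed policy condition and switching-system analysis are not reproduced here.

```python
import numpy as np

def stackelberg_q_iteration(P, r1, r2, gamma=0.9, iters=500):
    """Tabular Stackelberg Q-value iteration for a two-player
    general-sum Markov game (illustrative sketch, not the paper's code).

    P  : transitions, shape (S, A1, A2, S); P[s, a1, a2] sums to 1
    r1 : leader rewards,   shape (S, A1, A2)
    r2 : follower rewards, shape (S, A1, A2)
    """
    S, A1, A2, _ = P.shape
    Q1 = np.zeros((S, A1, A2))
    Q2 = np.zeros((S, A1, A2))
    states = np.arange(S)
    for _ in range(iters):
        # Follower's best response to each leader action, per state
        # (ties broken arbitrarily by argmax).
        br = Q2.argmax(axis=2)                                           # (S, A1)
        # Leader anticipates the follower and commits to the action that
        # maximizes its own Q along the follower's best-response curve.
        q1_br = np.take_along_axis(Q1, br[:, :, None], axis=2)[..., 0]   # (S, A1)
        a1 = q1_br.argmax(axis=1)                                        # (S,)
        a2 = br[states, a1]                                              # (S,)
        # Value of each state under the induced Stackelberg play.
        v1 = Q1[states, a1, a2]                                          # (S,)
        v2 = Q2[states, a1, a2]                                          # (S,)
        # Synchronous Bellman backup for both players' Q-functions.
        Q1 = r1 + gamma * (P @ v1)
        Q2 = r2 + gamma * (P @ v2)
    return Q1, Q2
```

Note that tie-breaking is not innocuous here: whether the follower resolves ties for or against the leader changes the solution concept in Stackelberg games, which hints at why some condition on the induced policies is needed for a clean convergence analysis.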

Abstract

Reinforcement learning has been successful both empirically and theoretically in single-agent settings, but extending these results to multi-agent reinforcement learning in general-sum Markov games remains challenging. This paper studies the convergence of Stackelberg Q-value iteration in two-player general-sum Markov games from a control-theoretic perspective. We introduce a relaxed policy condition tailored to the Stackelberg setting and model the learning dynamics as a switching system. By constructing upper and lower comparison systems, we establish finite-time error bounds for the Q-functions and characterize their convergence properties. Our results provide a novel control-theoretic perspective on Stackelberg learning. Moreover, to the best of the authors' knowledge, this paper offers the first finite-time convergence guarantees for Q-value iteration in general-sum Markov games under Stackelberg interactions.
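
The switching-system and comparison-system machinery admits a compact schematic. The display below uses single-agent-style notation ($P$ the transition matrix, $\Pi_\sigma$ the action-selection matrix of greedy mode $\sigma$, $Q^*$ the fixed point) and is an assumed shape of the argument rather than the paper's statements; the relaxed policy condition is what would make an analogue of this sandwich valid under Stackelberg play.

```latex
% Schematic only: single-agent-style switching-system analysis, assumed
% as the general shape of the argument, not the paper's notation.
e_k := Q_k - Q^*, \qquad
e_{k+1} = \gamma P \Pi_{\sigma_k} e_k
        + \gamma P \bigl( \Pi_{\sigma_k} - \Pi_{\sigma^*} \bigr) Q^*,
\qquad \sigma_k := \text{greedy mode of } Q_k .

% Upper and lower comparison systems with frozen modes bound the switched error:
\underline{e}_{k+1} = \gamma P \Pi_{\sigma^*} \underline{e}_k, \qquad
\overline{e}_{k+1}  = \gamma P \Pi_{\sigma_k} \overline{e}_k, \qquad
\underline{e}_k \le e_k \le \overline{e}_k \ \text{(componentwise)},

% which yields a finite-time bound of the geometric form
\lVert Q_k - Q^* \rVert_\infty \le C\, \gamma^{k}\, \lVert Q_0 - Q^* \rVert_\infty .
```

In the single-agent case the sandwich follows from the monotonicity of the max operator; general-sum Stackelberg backups lack that structure, which is presumably where the paper's relaxed policy condition does its work.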