Bayesian Learning in Episodic Zero-Sum Games

arXiv cs.LG / 2026/3/24


Key Points

  • The paper studies Bayesian learning in episodic, finite-horizon zero-sum Markov games where both the transition dynamics and the rewards are unknown, and each player maintains a Bayesian belief over the game model.
  • It proposes a posterior sampling approach in which, at the start of each episode, the agent independently samples a game model from its posterior and then computes an equilibrium policy for the sampled model.
  • The authors analyze two adversarial learning scenarios—both players using posterior sampling, and only one player using posterior sampling while the opponent follows an arbitrary learning method.
  • The main contribution is a sublinear expected regret bound of order O(HS√(ABHK log(SABHK))) for the posterior-sampling agent, defined against equilibrium performance in the true underlying game.
  • Experiments in a grid-world predator–prey setting show sublinear regret scaling and indicate that posterior sampling can outperform a fictitious-play baseline in practice.
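The per-episode loop in the bullets above can be sketched in Python. This is an illustration, not the paper's implementation: it collapses the Markov game to a single state (a matrix game with Bernoulli payoffs and Beta(1,1) priors), approximates the equilibrium step with fictitious play, and pits the sampling agent against a uniformly random opponent (one instance of the "arbitrary learner" setting). All function names are ours.

```python
import random

def solve_zero_sum(payoff, iters=2000):
    """Approximate maximin strategies of a zero-sum matrix game
    (row player maximizes) via fictitious play -- a simple stand-in
    for the equilibrium computation the approach assumes."""
    nA, nB = len(payoff), len(payoff[0])
    row_counts = [0] * nA
    col_counts = [0] * nB
    for _ in range(iters):
        # Row player best-responds to the column player's empirical mixture.
        row_vals = [sum(payoff[a][b] * col_counts[b] for b in range(nB))
                    for a in range(nA)]
        a_star = max(range(nA), key=lambda a: row_vals[a])
        # Column player best-responds (minimizes) to the row mixture.
        col_vals = [sum(payoff[a][b] * row_counts[a] for a in range(nA))
                    for b in range(nB)]
        b_star = min(range(nB), key=lambda b: col_vals[b])
        row_counts[a_star] += 1
        col_counts[b_star] += 1
    total = sum(row_counts)
    return ([c / total for c in row_counts],
            [c / total for c in col_counts])

def posterior_sampling_agent(true_payoff_probs, episodes=500, seed=0):
    """One-state illustration of the episodic loop: Bernoulli payoffs
    with Beta(1,1) priors. Each episode: sample a model, play the
    maximin strategy of the sampled game, update the posterior."""
    rng = random.Random(seed)
    nA, nB = len(true_payoff_probs), len(true_payoff_probs[0])
    wins = [[1] * nB for _ in range(nA)]    # Beta alpha parameters
    losses = [[1] * nB for _ in range(nA)]  # Beta beta parameters
    total_reward = 0.0
    for _ in range(episodes):
        # Step 1: sample a game model from the posterior.
        sampled = [[rng.betavariate(wins[a][b], losses[a][b])
                    for b in range(nB)] for a in range(nA)]
        # Step 2: compute an equilibrium policy for the sampled model.
        row_mix, _ = solve_zero_sum(sampled, iters=300)
        a = rng.choices(range(nA), weights=row_mix)[0]
        # Opponent plays uniformly at random (an arbitrary learner).
        b = rng.randrange(nB)
        # Step 3: observe the reward and update the posterior.
        r = 1 if rng.random() < true_payoff_probs[a][b] else 0
        total_reward += r
        wins[a][b] += r
        losses[a][b] += 1 - r
    return total_reward / episodes
```

The key design point carried over from the paper is the separation of sampling and planning: exploration comes entirely from posterior randomness, not from explicit bonuses.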

Abstract

We study Bayesian learning in episodic, finite-horizon zero-sum Markov games with unknown transition and reward models. We investigate a posterior sampling algorithm in which each player maintains a Bayesian posterior over the game model, independently samples a game model at the beginning of each episode, and computes an equilibrium policy for the sampled model. We analyze two settings: (i) both players use the posterior sampling algorithm, and (ii) only one player uses posterior sampling while the opponent follows an arbitrary learning algorithm. In each setting, we provide guarantees on the expected regret of the posterior sampling agent. Our notion of regret compares the expected total reward of the learning agent against the expected total reward under equilibrium policies of the true game. Our main theoretical result is an expected regret bound for the posterior sampling agent of order O(HS√(ABHK log(SABHK))), where K is the number of episodes, H is the episode length, S is the number of states, and A and B are the action-space sizes of the two players. Experiments in a grid-world predator–prey domain illustrate the sublinear regret scaling and show that posterior sampling competes favorably with a fictitious-play baseline.
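For concreteness, the regret notion described in the abstract can be written out as below. The symbols beyond those in the abstract (the initial state s₁, the equilibrium value V₁*, and the episode-k policies) are our own labels, and this is one of several equivalent conventions:

```latex
% Regret of the (maximizing) posterior sampling agent over K episodes:
% V_1^{*}(s_1) is the equilibrium (minimax) value of the true game from
% the initial state, and (\pi^k, \nu^k) are the policies played in
% episode k. The paper's bound controls the expectation of this sum:
% \mathbb{E}[\mathrm{Reg}(K)] = O\!\left(HS\sqrt{ABHK\log(SABHK)}\right).
\mathrm{Reg}(K) \;=\; \sum_{k=1}^{K} \Bigl( V_1^{*}(s_1) \;-\; V_1^{\pi^k,\,\nu^k}(s_1) \Bigr)
```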