Reproducing AlphaZero on Tablut: Self-Play RL for an Asymmetric Board Game

arXiv cs.LG / 4/8/2026


Key Points

  • The paper explores adapting AlphaZero-style self-play reinforcement learning to Tablut, a highly asymmetric board game with different objectives and unequal piece counts for attacker vs. defender.
  • It argues that the standard AlphaZero single policy/value head setup struggles in asymmetric settings because it forces the network to learn conflicting evaluation functions.
  • To improve learning, the authors modify the architecture to use separate policy and value heads per player role while sharing a residual trunk to capture common board representations.
  • Training proved unstable due to catastrophic forgetting between roles, and the study mitigates this with C4 data augmentation, a larger replay buffer, and a checkpoint-mixing strategy (playing 25% of games against randomly sampled past checkpoints).
  • After 100 self-play iterations, the model shows steady improvement, reaching a BayesElo of 1235 versus a random-initialization baseline, with training metrics indicating more decisive play as policy entropy drops.
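The dual-head design described in the key points can be sketched concretely: a shared trunk extracts board features, and the forward pass routes through a separate policy/value head pair depending on the player role. This is a minimal numpy illustration, not the paper's implementation; the dense trunk, layer widths, and flat move-index space are all hypothetical stand-ins for the actual residual network and move encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

BOARD_CELLS = 9 * 9   # Tablut is played on a 9x9 board
TRUNK_DIM = 64        # hypothetical trunk width
N_MOVES = 200         # hypothetical flat move-index space

def linear(n_in, n_out):
    # He-style initialization for a single dense layer
    return rng.normal(0.0, np.sqrt(2.0 / n_in), (n_in, n_out))

# Shared trunk: learns board features common to both roles
W_trunk = linear(BOARD_CELLS, TRUNK_DIM)

# Separate policy/value heads per role (0 = attacker, 1 = defender)
W_policy = [linear(TRUNK_DIM, N_MOVES) for _ in range(2)]
W_value = [linear(TRUNK_DIM, 1) for _ in range(2)]

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def forward(board, role):
    """Run the shared trunk, then the head pair for the given role."""
    h = np.maximum(W_trunk.T @ board.ravel(), 0.0)  # ReLU trunk features
    policy = softmax(W_policy[role].T @ h)          # distribution over moves
    value = np.tanh(W_value[role].T @ h)[0]         # scalar evaluation in (-1, 1)
    return policy, value

board = rng.normal(size=(9, 9))
attacker_pi, attacker_v = forward(board, role=0)
defender_pi, defender_v = forward(board, role=1)
```

Because the heads are disjoint, the attacker and defender evaluations can diverge freely while still sharing the trunk's board representation, which is the paper's remedy for the single-head conflict.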

Abstract

This work investigates the adaptation of the AlphaZero reinforcement learning algorithm to Tablut, an asymmetric historical board game featuring unequal piece counts and distinct player objectives (king capture versus king escape). While the original AlphaZero architecture successfully leverages a single policy and value head for symmetric games, applying it to asymmetric environments forces the network to learn two conflicting evaluation functions, which can hinder learning efficiency and performance. To address this, the core architecture is modified to use separate policy and value heads for each player role, while maintaining a shared residual trunk to learn common board features. The asymmetric structure nonetheless introduced instabilities during training, notably catastrophic forgetting between the attacker and defender roles. These issues were mitigated by applying C4 data augmentation, increasing the replay buffer size, and having the model play 25 percent of training games against randomly sampled past checkpoints. Over 100 self-play iterations, the modified model demonstrated steady improvement, achieving a BayesElo rating of 1235 relative to a randomly initialized baseline. Training metrics also showed a significant decrease in policy entropy and average remaining pieces, reflecting increasingly focused and decisive play. Ultimately, the experiments confirm that AlphaZero's self-play framework can transfer to highly asymmetric games, provided that distinct policy/value heads and robust stabilization techniques are employed.
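The C4 data augmentation mentioned in the abstract can be illustrated with a short sketch: each training position is expanded into its four 90-degree rotations, with the policy target rotated in lockstep so move probabilities stay aligned with the board. The per-square policy-plane encoding below is an assumption for illustration; the paper's actual move encoding is not specified here.

```python
import numpy as np

def c4_augment(board, policy_planes):
    """Expand one (board, policy) training example into its four
    90-degree rotations -- the cyclic group C4 acting on the grid."""
    return [(np.rot90(board, k), np.rot90(policy_planes, k))
            for k in range(4)]

# Toy example: a 9x9 board and a policy with all mass on one square
board = np.arange(81).reshape(9, 9)
policy = np.zeros((9, 9))
policy[0, 4] = 1.0

augmented = c4_augment(board, policy)
```

Each self-play game thus yields four times as many training samples, which, together with the larger replay buffer and checkpoint mixing, is what the authors credit with stabilizing learning across the two roles.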