
STAIRS-Former: Spatio-Temporal Attention with Interleaved Recursive Structure Transformer for Offline Multi-task Multi-agent Reinforcement Learning

arXiv cs.AI / 3/13/2026


Key Points

  • STAIRS-Former is a transformer architecture for offline multi-agent reinforcement learning that uses spatial and temporal hierarchies to enhance cross-agent attention and capture long-horizon interaction histories.
  • The model strengthens inter-agent coordination through attention over critical tokens and an interleaved recursive structure, which accommodates varying agent counts across tasks.
  • Token dropout is introduced to improve robustness and generalization across different agent populations (a minimal sketch follows this list).
  • Extensive experiments on SMAC, SMAC-v2, MPE, and MaMuJoCo demonstrate state-of-the-art performance across diverse multi-task benchmarks.
  • This work advances offline MARL in partially observable, multi-task settings by improving generalization and coordination among agents.
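
As a rough illustration of the token dropout idea, the sketch below drops whole per-agent observation tokens during training so the model learns to cope with missing or unseen agents. The function name, tensor shapes, and masking scheme are assumptions for illustration, not the paper's exact method.

```python
import torch

def token_dropout(tokens: torch.Tensor, drop_prob: float = 0.1,
                  training: bool = True) -> torch.Tensor:
    """Illustrative token dropout: zero out entire agent tokens at random.

    tokens: (batch, num_tokens, embed_dim) -- one token per agent observation.
    This is a sketch of the general technique, not STAIRS-Former's exact scheme.
    """
    if not training or drop_prob == 0.0:
        return tokens
    batch, num_tokens, _ = tokens.shape
    # One Bernoulli keep-decision per token (not per feature), so an
    # agent's whole token is dropped at once, mimicking a missing agent.
    keep = (torch.rand(batch, num_tokens, 1, device=tokens.device) >= drop_prob)
    return tokens * keep.to(tokens.dtype)
```

As with standard dropout, one might rescale the kept tokens by 1/(1 − p) to keep expected activation magnitudes stable at test time; the summary does not specify whether the paper does this.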

Abstract

Offline multi-agent reinforcement learning (MARL) with multi-task datasets is challenging due to varying numbers of agents across tasks and the need to generalize to unseen scenarios. Prior works employ transformers with observation tokenization and hierarchical skill learning to address these issues. However, they underutilize the transformer attention mechanism for inter-agent coordination and rely on a single history token, which limits their ability to capture long-horizon temporal dependencies in partially observable MARL settings. In this paper, we propose STAIRS-Former, a transformer architecture augmented with spatial and temporal hierarchies that enables effective attention over critical tokens while capturing long interaction histories. We further introduce token dropout to enhance robustness and generalization across varying agent populations. Extensive experiments on diverse multi-agent benchmarks, including SMAC, SMAC-v2, MPE, and MaMuJoCo, with multi-task datasets demonstrate that STAIRS-Former consistently outperforms prior methods and achieves new state-of-the-art performance.
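
One plausible reading of the spatial and temporal hierarchies described above is a block that alternates self-attention across the agent axis (spatial, per timestep) with self-attention across the timestep axis (temporal, per agent), sketched below in PyTorch. All class names, shapes, and layer choices here are illustrative assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class InterleavedSTBlock(nn.Module):
    """Sketch of interleaved spatial/temporal attention (assumed design)."""

    def __init__(self, embed_dim: int = 64, num_heads: int = 4):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(embed_dim)
        self.norm2 = nn.LayerNorm(embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, agents, embed_dim)
        b, t, a, d = x.shape
        # Spatial pass: attend across agents within each timestep.
        s = x.reshape(b * t, a, d)
        s = self.norm1(s + self.spatial_attn(s, s, s, need_weights=False)[0])
        x = s.reshape(b, t, a, d)
        # Temporal pass: attend across timesteps for each agent, which is
        # where long-horizon interaction histories would be captured.
        # An offline sequence model would typically add a causal attention
        # mask here; omitted for brevity.
        m = x.permute(0, 2, 1, 3).reshape(b * a, t, d)
        m = self.norm2(m + self.temporal_attn(m, m, m, need_weights=False)[0])
        return m.reshape(b, a, t, d).permute(0, 2, 1, 3)
```

Because the spatial pass treats agents as an unordered token set, the same block handles tasks with different agent counts, which is consistent with the paper's stated goal of generalizing across varying agent populations.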