AI Navigate

A Semi-Decentralized Approach to Multiagent Control

arXiv cs.AI / 3/13/2026


Key Points

  • The paper introduces a semi-decentralized framework that combines semi-Markov control with communication uncertainty and extends it to POMDPs, yielding the SDec-POMDP model.
  • It presents RS-SDA*, an exact algorithm for generating optimal policies under the SDec-POMDP formulation.
  • SDec-POMDP unifies decentralized and multiagent POMDPs and encompasses several existing explicit communication mechanisms.
  • The authors evaluate the approach on semi-decentralized versions of standard benchmarks and a maritime medical evacuation scenario, demonstrating practical applicability.
  • The work provides a rigorous theoretical foundation for exploring a broad class of multiagent communication problems through semi-decentralization.

Abstract

We introduce an expressive framework and algorithms for the semi-decentralized control of cooperative agents in environments with communication uncertainty. Whereas semi-Markov control admits a distribution over time for agent actions, semi-Markov communication, or what we refer to as semi-decentralization, gives a distribution over time for what actions and observations agents can store in their histories. We extend semi-decentralization to the partially observable Markov decision process (POMDP). The resulting SDec-POMDP unifies decentralized and multiagent POMDPs and several existing explicit communication mechanisms. We present recursive small-step semi-decentralized A* (RS-SDA*), an exact algorithm for generating optimal SDec-POMDP policies. RS-SDA* is evaluated on semi-decentralized versions of several standard benchmarks and a maritime medical evacuation scenario. This paper provides a well-defined theoretical foundation for exploring many classes of multiagent communication problems through the lens of semi-decentralization.
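The core idea of semi-decentralization — that the time at which an agent's actions and observations enter other agents' histories is itself random — can be illustrated with a toy simulation. The sketch below is not the paper's formalism; the geometric delay distribution and all names (`sample_comm_delay`, `simulate`) are illustrative assumptions.

```python
import random

def sample_comm_delay(rng, p=0.5):
    """Geometric delay: number of steps before a broadcast is stored.

    The geometric distribution here is an arbitrary illustrative choice;
    semi-decentralization only requires *some* distribution over time.
    """
    d = 1
    while rng.random() > p:
        d += 1
    return d

def simulate(horizon=10, n_agents=2, seed=0):
    rng = random.Random(seed)
    # histories[i] holds (t_observed, source, t_stored) tuples for agent i
    histories = [[] for _ in range(n_agents)]
    in_flight = []  # (t_arrival, recipient, t_observed, source)
    for t in range(horizon):
        # deliver broadcasts whose sampled delay has elapsed
        pending = []
        for t_arr, who, t_obs, src in in_flight:
            if t_arr <= t:
                histories[who].append((t_obs, src, t))
            else:
                pending.append((t_arr, who, t_obs, src))
        in_flight = pending
        # each agent stores its own observation immediately and
        # broadcasts it to the others with a random storage delay
        for i in range(n_agents):
            src = f"agent_{i}"
            histories[i].append((t, src, t))
            for j in range(n_agents):
                if j != i:
                    in_flight.append((t + sample_comm_delay(rng), j, t, src))
    return histories
```

In this sketch, an agent's own events are stored without delay, while remote events are stored after at least one step — a crude stand-in for the communication uncertainty that the SDec-POMDP model captures formally.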