M$^{2}$GRPO: Mamba-based Multi-Agent Group Relative Policy Optimization for Biomimetic Underwater Robots Pursuit

arXiv cs.RO / 4/22/2026


Key Points

  • The paper introduces M$^{2}$GRPO, a new multi-agent reinforcement learning framework for cooperative pursuit by biomimetic underwater robots that must handle long-horizon decision making, partial observability, and inter-robot coordination.
  • It combines a selective state-space Mamba policy that uses observation history and attention-based relational features with a group-relative policy optimization approach under the CTDE (centralized training, decentralized execution) paradigm.
  • The method outputs bounded continuous actions via normalized Gaussian sampling, aiming to improve stability while maintaining policy expressiveness.
  • For better credit assignment without destabilizing training, M$^{2}$GRPO normalizes rewards across agents within each episode to form group-relative advantages and optimizes them with a multi-agent extension of GRPO, which forgoes a learned value critic and thus reduces training resource requirements.
  • Experiments in both simulations and real-world pool settings show that M$^{2}$GRPO outperforms MAPPO and recurrent baselines in pursuit success rate and capture efficiency across multiple team sizes and evader strategies.
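The group-relative advantage idea in the key points can be sketched numerically. The snippet below is a minimal illustration, assuming the standard GRPO-style normalization (subtract the group mean, divide by the group standard deviation) applied to per-agent episode returns; the paper's exact normalization details may differ.

```python
import numpy as np

def group_relative_advantages(episode_rewards, eps=1e-8):
    """Compute group-relative advantages from per-agent episode returns.

    episode_rewards: array-like of shape (n_agents,) holding each agent's
    total reward for one episode. Returns zero-mean advantages scaled by
    the group's reward standard deviation (a GRPO-style baseline sketch,
    not the authors' exact implementation).
    """
    r = np.asarray(episode_rewards, dtype=np.float64)
    return (r - r.mean()) / (r.std() + eps)

# Example: three pursuers with different episode returns.
adv = group_relative_advantages([1.0, 2.0, 3.0])
```

Because the baseline is the group mean rather than a learned value function, agents are scored only relative to their teammates in the same episode, which is what makes the update cheap and comparatively stable.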

Abstract

Traditional policy learning methods for cooperative pursuit face fundamental challenges on biomimetic underwater robots, where long-horizon decision making, partial observability, and inter-robot coordination demand both expressiveness and stability. To address these issues, a novel framework called Mamba-based multi-agent group relative policy optimization (M$^{2}$GRPO) is proposed, which integrates a selective state-space Mamba policy with group-relative policy optimization under the centralized-training, decentralized-execution (CTDE) paradigm. Specifically, the Mamba-based policy leverages observation history to capture long-horizon temporal dependencies and exploits attention-based relational features to encode inter-agent interactions, producing bounded continuous actions through normalized Gaussian sampling. To further improve credit assignment without sacrificing stability, group-relative advantages are obtained by normalizing rewards across agents within each episode and optimized through a multi-agent extension of GRPO, significantly reducing the demand for training resources while enabling stable and scalable policy updates. Extensive simulations and real-world pool experiments across team scales and evader strategies demonstrate that M$^{2}$GRPO consistently outperforms MAPPO and recurrent baselines in both pursuit success rate and capture efficiency. Overall, the proposed framework provides a practical and scalable solution for cooperative underwater pursuit with biomimetic robot systems.
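The abstract's "bounded continuous actions through normalized Gaussian sampling" can be illustrated with a common construction: sample from a Gaussian and squash the result into the action bounds. The sketch below uses a tanh squash followed by rescaling; the paper does not spell out its exact transform, so treat this as one plausible instantiation, with all function and parameter names being illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bounded_action(mu, log_std, low, high, rng=rng):
    """Sample a continuous action and squash it into [low, high].

    mu, log_std: Gaussian policy outputs per action dimension.
    low, high:   per-dimension action bounds (e.g. thrust and turn limits).
    A generic tanh-squashed Gaussian sketch, not M$^{2}$GRPO's exact scheme.
    """
    mu = np.asarray(mu, dtype=np.float64)
    std = np.exp(np.asarray(log_std, dtype=np.float64))
    raw = rng.normal(mu, std)            # unbounded Gaussian sample
    squashed = np.tanh(raw)              # map into (-1, 1)
    return low + 0.5 * (squashed + 1.0) * (high - low)  # rescale to (low, high)

# Example: a 2-D action (e.g. surge speed in [-1, 1], yaw rate in [0, 2]).
action = sample_bounded_action(mu=[0.0, 0.0], log_std=[-1.0, -1.0],
                               low=np.array([-1.0, 0.0]),
                               high=np.array([1.0, 2.0]))
```

Squashing keeps every sampled command within actuator limits by construction, so exploration noise can never emit an infeasible action, which is one way such a scheme supports the stability the abstract emphasizes.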