Large Language Model Guided Incentive Aware Reward Design for Cooperative Multi-Agent Reinforcement Learning

arXiv cs.LG · March 26, 2026


Key Points

  • The paper addresses how to automatically design auxiliary rewards for cooperative multi-agent reinforcement learning to avoid incentive misalignment and poor coordination when feedback is sparse.
  • It proposes an LLM-guided framework that generates executable reward programs from environment instrumentation, restricting them to a formally valid search space.
  • Candidate reward programs are selected by training multi-agent policies from scratch under a fixed compute budget and choosing the one that maximizes sparse task returns.
  • Experiments on four Overcooked-AI layouts show iterative search generations improve task returns and delivery counts, with the largest benefits in interaction-bottleneck-heavy settings.
  • Analysis of the learned shaping components suggests the method produces more interdependent action selection and better-aligned coordination signals than typical manual reward engineering.
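The search loop implied by the points above — propose candidate reward programs, train policies from scratch under a fixed budget, select solely on sparse task return, and iterate — can be sketched minimally. This is a hedged illustration, not the paper's implementation: `propose_reward_programs` and `train_and_evaluate` are hypothetical stand-ins for the LLM proposal step and the budgeted multi-agent training run, and the toy scoring function is invented for the example.

```python
import random

def propose_reward_programs(n, seed_programs=None):
    """Stand-in for the LLM proposal step (hypothetical): here we just
    sample a weight for a single toy shaping term."""
    rng = random.Random(0 if seed_programs is None else len(seed_programs))
    return [{"handoff_bonus": rng.uniform(0.0, 1.0)} for _ in range(n)]

def train_and_evaluate(program, budget_steps=1000):
    """Stand-in for training multi-agent policies from scratch under a
    fixed compute budget; returns a toy sparse task return. In the paper
    this would be actual MARL training on the target layout."""
    # Invented for illustration: pretend a handoff bonus near 0.6 best
    # aligns the shaping signal with the sparse delivery objective.
    return budget_steps * (1.0 - abs(program["handoff_bonus"] - 0.6))

def reward_search(generations=3, pop_size=4):
    """Iterate proposal -> budgeted training -> selection on sparse return."""
    best_program, best_return = None, float("-inf")
    survivors = None
    for _ in range(generations):
        candidates = propose_reward_programs(pop_size, survivors)
        scored = [(train_and_evaluate(p), p) for p in candidates]
        scored.sort(key=lambda sp: sp[0], reverse=True)
        # Selection depends exclusively on the sparse task return.
        if scored[0][0] > best_return:
            best_return, best_program = scored[0]
        # Surviving programs seed the next generation's proposals.
        survivors = [p for _, p in scored[: pop_size // 2]]
    return best_program, best_return
```

Note the key design choice the paper emphasizes: candidates are scored only on the sparse task return, so the shaping program can never be selected for gaming its own auxiliary signal.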

Abstract

Designing effective auxiliary rewards for cooperative multi-agent systems remains a precarious task; misaligned incentives risk inducing suboptimal coordination, especially where sparse task feedback fails to provide sufficient grounding. This study introduces an automated reward design framework that leverages large language models to synthesize executable reward programs from environment instrumentation. The procedure constrains candidate programs within a formal validity envelope and evaluates their efficacy by training policies from scratch under a fixed computational budget; selection depends exclusively on the sparse task return. The framework is evaluated across four distinct Overcooked-AI layouts characterized by varied corridor congestion, handoff dependencies, and structural asymmetries. Iterative search generations consistently yield superior task returns and delivery counts, with the most pronounced gains occurring in environments dominated by interaction bottlenecks. Diagnostic analysis of the synthesized shaping components indicates increased interdependence in action selection and improved signal alignment in coordination-intensive tasks. These results demonstrate that the search for objective-grounded reward programs can mitigate the burden of manual engineering while producing shaping signals compatible with cooperative learning under finite budgets.