AI Navigate

Code-Space Response Oracles: Generating Interpretable Multi-Agent Policies with Large Language Models

arXiv cs.AI / 3/12/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • CSRO replaces black-box RL oracles with LLMs by generating policies as human-readable code, improving interpretability and trust in multi-agent settings.
  • It reframes best-response computation as a code-generation task and explores zero-shot prompting, iterative refinement, and AlphaEvolve (a distributed LLM-based evolutionary system).
  • The approach achieves competitive performance with baselines while yielding a diverse, explainable set of policies, shifting focus from opaque policy parameters to interpretable algorithmic behavior.
  • By leveraging pretrained LLM knowledge, CSRO can discover complex, human-like strategies that are easier to inspect, debug and reason about.
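The core idea in the points above, a best response expressed as human-readable code rather than neural-network weights, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the LLM call is mocked with a fixed code string, and the toy rock-paper-scissors game, `load_policy`, and `evaluate` are invented for the example.

```python
# Hypothetical CSRO-style sketch: the "oracle" returns a policy as source
# code, which stays fully inspectable. The LLM is mocked by a fixed string.

GENERATED_POLICY = '''
def policy(opponent_history):
    # Counter the opponent's most frequent past move; open with "rock".
    if not opponent_history:
        return "rock"
    counts = {"rock": 0, "paper": 0, "scissors": 0}
    for move in opponent_history:
        counts[move] += 1
    most_common = max(counts, key=counts.get)
    beats = {"rock": "paper", "paper": "scissors", "scissors": "rock"}
    return beats[most_common]
'''

def load_policy(source):
    """Compile generated source into a callable policy (still readable as text)."""
    namespace = {}
    exec(source, namespace)
    return namespace["policy"]

def evaluate(policy, opponent_moves):
    """Score the policy against a fixed opponent move sequence (+1 win, -1 loss)."""
    beats = {"rock": "scissors", "paper": "rock", "scissors": "paper"}
    history, score = [], 0
    for opp in opponent_moves:
        ours = policy(history)
        if beats[ours] == opp:
            score += 1
        elif beats[opp] == ours:
            score -= 1
        history.append(opp)
    return score

policy = load_policy(GENERATED_POLICY)
print(evaluate(policy, ["rock"] * 10))  # ties the first round, then wins: 9
```

Because the policy is plain source code, debugging it means reading a dozen lines rather than probing millions of opaque parameters.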

Abstract

Recent advances in multi-agent reinforcement learning, particularly Policy-Space Response Oracles (PSRO), have enabled the computation of approximate game-theoretic equilibria in increasingly complex domains. However, these methods rely on deep reinforcement learning oracles that produce "black-box" neural network policies, making them difficult to interpret, trust, or debug. We introduce Code-Space Response Oracles (CSRO), a novel framework that addresses this challenge by replacing RL oracles with Large Language Models (LLMs). CSRO reframes best-response computation as a code-generation task, prompting an LLM to generate policies directly as human-readable code. This approach not only yields inherently interpretable policies but also leverages the LLM's pretrained knowledge to discover complex, human-like strategies. We explore multiple ways to construct and enhance an LLM-based oracle: zero-shot prompting, iterative refinement, and AlphaEvolve, a distributed LLM-based evolutionary system. We demonstrate that CSRO achieves performance competitive with baselines while producing a diverse set of explainable policies. Our work presents a new perspective on multi-agent learning, shifting the focus from optimizing opaque policy parameters to synthesizing interpretable algorithmic behavior.
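The iterative-refinement oracle the abstract mentions can be sketched as a propose-score-keep loop: each round, the LLM revises the current best policy, and the higher-scoring candidate seeds the next round's prompt. Everything below is an invented stand-in for illustration: the "LLM" is mocked as a random perturbation of a single numeric strategy parameter, and the toy payoff function is not from the paper.

```python
# Hypothetical sketch of an iterative-refinement oracle. A real system would
# send the best policy's source code back to an LLM with its evaluation score;
# here the LLM edit is mocked by perturbing one parameter of a toy strategy.
import random

random.seed(0)

def score(threshold):
    """Toy payoff: the best response in this stylized game is threshold = 0.7."""
    return -abs(threshold - 0.7)

def mock_llm_refine(best_threshold):
    """Stand-in for an LLM proposing a revised policy."""
    return min(1.0, max(0.0, best_threshold + random.uniform(-0.1, 0.1)))

best = 0.2  # initial zero-shot policy
for _ in range(200):
    candidate = mock_llm_refine(best)
    if score(candidate) > score(best):
        best = candidate  # keep the stronger policy as the next prompt seed

print(round(best, 2))
```

AlphaEvolve-style search generalizes this loop by maintaining a population of candidate programs and refining them in parallel, which is one way the framework can yield a diverse set of policies rather than a single best response.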