Agent-GWO: Collaborative Agents for Dynamic Prompt Optimization in Large Language Models

arXiv cs.AI / 4/22/2026


Key Points

  • The paper introduces Agent-GWO, a dynamic framework that optimizes both LLM prompt templates and decoding hyperparameters together rather than using a single-agent local search approach.
  • It treats prompts and decoding settings as inheritable “agent configurations” and uses a leader–follower scheme from the Grey Wolf Optimizer (GWO) with three leaders (α, β, δ) to guide iterative updates.
  • The method targets the problem that manual static prompts and decoding choices can cause performance fluctuations and limited transferability across tasks and model backbones.
  • Experiments on multiple mathematical and hybrid reasoning benchmarks across diverse LLM backbones show improved accuracy and stability compared with existing prompt optimization methods.
  • The authors state that the code will be publicly released, enabling others to apply and evaluate the framework.

Abstract

Large Language Models (LLMs) have demonstrated strong capabilities in complex reasoning tasks, while recent prompting strategies such as Chain-of-Thought (CoT) have further elevated their performance in handling complex logical problems. Despite these advances, high-quality reasoning remains heavily reliant on manual static prompts and is sensitive to decoding configurations and task distributions, leading to performance fluctuations and limited transferability. Existing automatic prompt optimization methods typically adopt single-agent local search, failing to simultaneously optimize prompts and decoding hyperparameters within a unified framework to achieve stable global improvements. To address this limitation, we propose Agent-GWO, a dynamic prompt optimization framework for complex reasoning. Specifically, we unify prompt templates and decoding hyperparameters as inheritable agent configurations. By leveraging the leader–follower mechanism of the Grey Wolf Optimizer (GWO), we automatically select three leader agents (α, β, and δ) to guide the collaborative updates of the remaining agents, enabling iterative convergence toward robust optimal reasoning configurations that can be seamlessly integrated for inference. Extensive experiments on multiple mathematical and hybrid reasoning benchmarks across diverse LLM backbones show that Agent-GWO consistently improves accuracy and stability over existing prompt optimization methods. The code will be released publicly.
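To make the leader–follower mechanism concrete, here is a minimal sketch of the standard Grey Wolf Optimizer update that the abstract builds on. This is not the paper's released code: it optimizes only a numeric configuration vector (which could encode decoding hyperparameters such as temperature or top-p), and the `fitness` function, bounds, and agent count are illustrative assumptions. Encoding prompt templates as inheritable configurations, as Agent-GWO does, would require an additional representation step omitted here.

```python
import random

def gwo_optimize(fitness, dim, bounds, n_agents=8, n_iters=30, seed=0):
    """Minimal Grey Wolf Optimizer sketch (fitness is maximized).

    In the Agent-GWO setting, each "wolf" would be an agent configuration
    (prompt template + decoding hyperparameters); here it is just a numeric
    vector, e.g. decoding settings scaled into [lo, hi].
    """
    rng = random.Random(seed)
    lo, hi = bounds
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_agents)]

    def clamp(x):
        return min(max(x, lo), hi)

    for t in range(n_iters):
        # Rank agents by fitness; the top three become the alpha, beta,
        # and delta leaders (snapshotted so updates don't shift them mid-step).
        ranked = sorted(wolves, key=fitness, reverse=True)
        leaders = [list(w) for w in ranked[:3]]
        a = 2.0 * (1 - t / n_iters)  # exploration coefficient decays 2 -> 0
        for w in wolves:
            for d in range(dim):
                pulls = []
                for leader in leaders:
                    r1, r2 = rng.random(), rng.random()
                    A = 2 * a * r1 - a          # step scale (can be negative)
                    C = 2 * r2                  # leader-weighting coefficient
                    D = abs(C * leader[d] - w[d])
                    pulls.append(leader[d] - A * D)
                # Each follower moves toward the consensus of the three leaders.
                w[d] = clamp(sum(pulls) / 3)
    return max(wolves, key=fitness)

# Toy usage: a 1-D surrogate objective with its optimum at 0.7,
# standing in for validation accuracy over a decoding setting.
best = gwo_optimize(lambda x: -(x[0] - 0.7) ** 2, dim=1, bounds=(0.0, 1.0))
```

In the full framework, `fitness` would score an agent's configuration on held-out reasoning tasks, and the converged leader configuration would be plugged in at inference time.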