Co-evolving Agent Architectures and Interpretable Reasoning for Automated Optimization

arXiv cs.AI / 4/21/2026


Key Points

  • The paper argues that using LLMs for automated operations research is still constrained by largely hand-crafted reasoning-to-execution workflows that don’t adapt well to complex OR tasks.
  • It introduces EvoOR-Agent, a co-evolutionary framework that models agent workflows as AOE-style networks to make dependencies, workflow topology, and alternative reasoning paths explicit.
  • EvoOR-Agent evolves reasoning “individuals” using graph-mediated, path-conditioned recombination, multi-granularity semantic mutation, and elitist updates to improve optimization performance.
  • A knowledge-base-assisted experience-acquisition module injects reusable OR practices into both initialization and semantic variation, helping the system reuse domain knowledge.
  • Experiments on heterogeneous OR benchmarks show consistent gains over zero-shot LLMs, fixed-pipeline OR agents, and evolutionary agent baselines, and ablations suggest that architecture evolution and graph-supported reasoning-trajectory search improve both accuracy and interpretability.
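To make the AOE-style representation concrete, here is a minimal sketch of how a workflow could be encoded as an activity-on-edge network and how alternative reasoning paths fall out of it. The node and activity names are illustrative assumptions, not the paper's actual stage set or API.

```python
from collections import defaultdict

# Hypothetical AOE-style workflow: nodes are synchronization events,
# labeled edges are agent activities. Two edges between the same pair of
# nodes encode alternative reasoning steps (here, two formulation styles).
EDGES = [
    ("start", "interpreted", "interpret_problem"),
    ("interpreted", "modeled", "formulate_milp"),
    ("interpreted", "modeled", "formulate_lp"),   # alternative formulation
    ("modeled", "coded", "generate_solver_code"),
    ("coded", "done", "run_and_debug"),
]

def enumerate_paths(edges, src="start", dst="done"):
    """Enumerate alternative reasoning paths as activity sequences."""
    adj = defaultdict(list)
    for u, v, act in edges:
        adj[u].append((v, act))
    paths, stack = [], [(src, [])]
    while stack:
        node, acts = stack.pop()
        if node == dst:
            paths.append(acts)
            continue
        for v, act in adj[node]:
            stack.append((v, acts + [act]))
    return paths

for p in enumerate_paths(EDGES):
    print(" -> ".join(p))
```

Because dependencies live on the graph rather than in a hard-coded pipeline, adding an alternative edge immediately yields a new candidate reasoning path, which is what makes the topology explicit and searchable.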

Abstract

Automating operations research (OR) with large language models (LLMs) remains limited by hand-crafted reasoning-execution workflows. Complex OR tasks require adaptive coordination among problem interpretation, mathematical formulation, solver selection, code generation, and iterative debugging. To address this limitation, we propose EvoOR-Agent, a co-evolutionary framework for automated optimization. The framework represents agent workflows as activity-on-edge (AOE)-style networks, making workflow topology, execution dependencies, and alternative reasoning paths explicit. On this representation, the framework maintains an architecture graph and evolves a population of reasoning individuals through graph-mediated path-conditioned recombination, multi-granularity semantic mutation, and elitist population update. A knowledge-base-assisted experience-acquisition module further injects reusable OR practices into initialization and semantic variation. Empirical results on heterogeneous OR benchmarks show that the proposed framework consistently improves over zero-shot LLMs, fixed-pipeline OR agents, and representative evolutionary agent frameworks. Case studies and ablation analyses further indicate that explicit architecture evolution and graph-supported reasoning-trajectory search contribute to both performance improvement and structural interpretability. These results suggest that treating agent architectures and reasoning trajectories as evolvable objects provides an effective route toward adaptive and interpretable automated optimization.
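The evolutionary loop described in the abstract can be sketched in a few lines. This is a toy under stated assumptions: individuals are reasoning paths (activity sequences over fixed workflow positions), recombination splices two parents at a shared node index, mutation swaps one activity for an alternative, and the fitness function is a stand-in for the paper's actual solver-based evaluation. All names and the `ALTERNATIVES` table are hypothetical.

```python
import random

random.seed(0)  # deterministic toy run

# Alternative activities available at each workflow position (hypothetical).
ALTERNATIVES = {
    0: ["interpret"],
    1: ["formulate_milp", "formulate_lp"],
    2: ["codegen_gurobi", "codegen_pulp"],
    3: ["debug"],
}

def fitness(path):
    # Stand-in objective: in this toy, one formulation/codegen pair scores best.
    return ("formulate_milp" in path) + ("codegen_gurobi" in path)

def recombine(a, b):
    # Path-conditioned recombination: splice at a shared node index.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(path):
    # Semantic mutation: replace one activity with an alternative at that node.
    i = random.randrange(len(path))
    return path[:i] + [random.choice(ALTERNATIVES[i])] + path[i + 1:]

def evolve(pop_size=6, generations=10):
    pop = [[random.choice(ALTERNATIVES[i]) for i in range(4)]
           for _ in range(pop_size)]
    for _ in range(generations):
        children = [mutate(recombine(*random.sample(pop, 2)))
                    for _ in range(pop_size)]
        # Elitist update: parents and children compete; the best survive.
        pop = sorted(pop + children, key=fitness, reverse=True)[:pop_size]
    return pop[0]

best = evolve()
print(" -> ".join(best))
```

The elitist update guarantees the best path found so far is never lost, while recombination and mutation keep exploring alternative trajectories over the architecture graph.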