From Static Templates to Dynamic Runtime Graphs: A Survey of Workflow Optimization for LLM Agents

arXiv cs.AI / 3/25/2026


Key Points

  • The article surveys how LLM agents can construct executable workflows that combine LLM calls, retrieval, tool use, code execution, memory updates, and verification.
  • It frames these workflows as agentic computation graphs (ACGs) and classifies methods by when the workflow structure is decided (static vs dynamic), what is optimized, and what signals guide optimization (task metrics, verifier signals, preferences, or trace feedback).
  • It distinguishes reusable workflow templates from run-specific realized graphs and from execution traces, helping separate design-time decisions from what actually happens at runtime.
  • The survey proposes a structure-aware evaluation approach that goes beyond task success metrics to include graph properties, execution cost, robustness, and structural variation across inputs.
  • The stated aim is to provide a shared vocabulary and unified framework to improve comparability, reproducibility, and evaluation standards for future workflow-optimization research.

Abstract

Large language model (LLM)-based systems are becoming increasingly popular for solving tasks by constructing executable workflows that interleave LLM calls, information retrieval, tool use, code execution, memory updates, and verification. This survey reviews recent methods for designing and optimizing such workflows, which we treat as agentic computation graphs (ACGs). We organize the literature based on when workflow structure is determined, where structure refers to which components or agents are present, how they depend on each other, and how information flows between them. This lens distinguishes static methods, which fix a reusable workflow scaffold before deployment, from dynamic methods, which select, generate, or revise the workflow for a particular run before or during execution. We further organize prior work along three dimensions: when structure is determined, what part of the workflow is optimized, and which evaluation signals guide optimization (e.g., task metrics, verifier signals, preferences, or trace-derived feedback). We also distinguish reusable workflow templates, run-specific realized graphs, and execution traces, separating reusable design choices from the structures actually deployed in a given run and from realized runtime behavior. Finally, we outline a structure-aware evaluation perspective that complements downstream task metrics with graph-level properties, execution cost, robustness, and structural variation across inputs. Our goal is to provide a clear vocabulary, a unified framework for positioning new methods, a more comparable view of the existing literature, and a more reproducible evaluation standard for future work on workflow optimization for LLM agents.
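The survey's three-way distinction between a reusable workflow template, a run-specific realized graph, and an execution trace can be illustrated with a minimal sketch. All class names, node kinds, and the realize/execute API below are illustrative assumptions for exposition, not structures defined in the paper:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Node:
    name: str
    kind: str                    # e.g. "llm_call", "retrieval", "tool", "verify"
    fn: Callable[[dict], dict]   # node computation over a shared state dict

@dataclass
class Template:
    """Reusable scaffold: node specs plus dependency edges, fixed at design time."""
    nodes: dict[str, Node]
    edges: list[tuple[str, str]]   # (upstream, downstream)

    def realize(self, task_input: dict) -> "RealizedGraph":
        # A static method copies the scaffold unchanged; a dynamic method
        # could prune, add, or rewire nodes here based on task_input.
        return RealizedGraph(self.nodes, list(self.edges), dict(task_input))

@dataclass
class RealizedGraph:
    """Run-specific graph actually deployed for one input."""
    nodes: dict[str, Node]
    edges: list[tuple[str, str]]
    state: dict

    def execute(self) -> list[tuple[str, dict]]:
        """Run nodes in topological order; the returned list is the trace."""
        indeg = {n: 0 for n in self.nodes}
        for _, dst in self.edges:
            indeg[dst] += 1
        ready = [n for n, d in indeg.items() if d == 0]
        trace = []
        while ready:
            name = ready.pop(0)
            self.state = self.nodes[name].fn(self.state)
            trace.append((name, dict(self.state)))   # record realized behavior
            for src, dst in self.edges:
                if src == name:
                    indeg[dst] -= 1
                    if indeg[dst] == 0:
                        ready.append(dst)
        return trace

# Usage: a two-node retrieve-then-answer scaffold with stub node functions.
def retrieve(s): return {**s, "docs": ["doc1"]}
def answer(s): return {**s, "answer": "stub"}

template = Template(
    nodes={"retrieve": Node("retrieve", "retrieval", retrieve),
           "answer": Node("answer", "llm_call", answer)},
    edges=[("retrieve", "answer")])
trace = template.realize({"query": "q"}).execute()
```

Under this framing, design-time optimization edits the `Template`, dynamic methods intervene in `realize` (or mid-`execute`), and trace-derived feedback signals are computed from the returned `trace`.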