WorkflowGen: an adaptive workflow generation mechanism driven by trajectory experience

arXiv cs.LG / 4/23/2026


Key Points

  • The paper introduces WorkflowGen, a framework that generates LLM-agent workflows by reusing “trajectory experiences” instead of rebuilding workflows from scratch for every query.
  • WorkflowGen captures complete execution trajectories early, extracts reusable knowledge (e.g., error fingerprints, tool mappings, parameter schemas, execution paths, and exception-avoidance strategies), and stores it for later use.
  • It uses a closed-loop process that performs lightweight generation only on variable nodes via trajectory rewriting, experience updating, and template induction.
  • A three-tier adaptive routing strategy chooses between direct reuse, rewriting-based generation, or full initialization based on semantic similarity to historical queries.
  • In qualitative comparisons (conducted without large annotated datasets), the paper reports over 40% lower token usage versus real-time planning and roughly a 20% success-rate improvement on medium-similarity queries, along with better robustness and deployability through modular, traceable experiences.
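The three-tier routing in the points above can be sketched as a similarity-thresholded dispatch. The threshold values, function names, and history layout below are illustrative assumptions; the paper does not publish them.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def route(query_embedding, history, reuse_thresh=0.9, rewrite_thresh=0.6):
    """Pick a generation strategy from semantic similarity to past queries.

    history: list of (embedding, stored_trajectory) pairs.
    Returns (strategy, trajectory_or_None).
    """
    best_sim, best_traj = 0.0, None
    for past_embedding, past_traj in history:
        sim = cosine_similarity(query_embedding, past_embedding)
        if sim > best_sim:
            best_sim, best_traj = sim, past_traj
    if best_sim >= reuse_thresh:
        return ("direct_reuse", best_traj)   # replay the stored workflow as-is
    if best_sim >= rewrite_thresh:
        return ("rewrite", best_traj)        # regenerate only the variable nodes
    return ("full_init", None)               # plan a workflow from scratch
```

Direct reuse costs no generation tokens, rewriting spends tokens only on variable nodes, and full initialization falls back to conventional real-time planning, which is where the reported token savings would come from.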

Abstract

Large language model (LLM) agents often suffer from high reasoning overhead, excessive token consumption, unstable execution, and inability to reuse past experiences in complex tasks like business queries, tool use, and workflow orchestration. Traditional methods generate workflows from scratch for every query, leading to high cost, slow response, and poor robustness. We propose WorkflowGen, an adaptive, trajectory experience-driven framework for automatic workflow generation that reduces token usage and improves efficiency and success rate. Early in execution, WorkflowGen captures full trajectories and extracts reusable knowledge at both node and workflow levels, including error fingerprints, optimal tool mappings, parameter schemas, execution paths, and exception-avoidance strategies. It then employs a closed-loop mechanism that performs lightweight generation only on variable nodes via trajectory rewriting, experience updating, and template induction. A three-tier adaptive routing strategy dynamically selects among direct reuse, rewriting-based generation, and full initialization based on semantic similarity to historical queries. Without large annotated datasets, we qualitatively compare WorkflowGen against real-time planning, static single trajectory, and basic in-context learning baselines. Our method reduces token consumption by over 40 percent compared to real-time planning, improves success rate by 20 percent on medium-similarity queries through proactive error avoidance and adaptive fallback, and enhances deployability via modular, traceable experiences and cross-scenario adaptability. WorkflowGen achieves a practical balance of efficiency, robustness, and interpretability, addressing key limitations of existing approaches.
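As a rough illustration of the node- and workflow-level knowledge the abstract enumerates (error fingerprints, tool mappings, parameter schemas, execution paths, exception-avoidance strategies), a stored trajectory experience might be modeled as the record below. All class and field names are hypothetical; the paper does not specify a storage schema.

```python
from dataclasses import dataclass, field

@dataclass
class NodeExperience:
    """Node-level knowledge extracted from a captured trajectory."""
    tool: str                        # best-performing tool mapping for this node
    param_schema: dict               # validated parameter schema for the tool call
    error_fingerprints: list = field(default_factory=list)  # failure signatures seen here

@dataclass
class TrajectoryExperience:
    """Workflow-level knowledge: one reusable, traceable experience unit."""
    query: str                       # the historical query this trajectory answered
    execution_path: list             # ordered node ids actually executed
    nodes: dict = field(default_factory=dict)            # node id -> NodeExperience
    avoidance_rules: list = field(default_factory=list)  # exception-avoidance strategies

# Example: one stored experience for a single-node lookup workflow.
exp = TrajectoryExperience(
    query="find supplier contact",
    execution_path=["lookup"],
    nodes={"lookup": NodeExperience(tool="crm_search", param_schema={"name": "str"})},
    avoidance_rules=["retry on rate-limit before failing"],
)
```

Keeping experiences modular like this is what would make them individually traceable and rewritable: a rewrite pass can swap a single `NodeExperience` without touching the rest of the path.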