Small Model as Master Orchestrator: Learning Unified Agent-Tool Orchestration with Parallel Subtask Decomposition

arXiv cs.AI / 4/21/2026


Key Points

  • The paper addresses limitations of existing multi-agent orchestration approaches that use static workflows or serial scheduling and struggle with heterogeneous tool/agent interfaces.
  • It introduces “Agent-as-Tool,” a unified parallel orchestration framework that normalizes protocols and uses explicit state feedback, treating both agents and tools as elements in a standardized, learnable action space.
  • Based on this paradigm, the authors train a lightweight orchestrator called ParaManager that separates planning from subtask solving and supports state-aware parallel decomposition, delegation, and asynchronous execution.
  • The training uses a two-stage pipeline combining supervised fine-tuning with recovery mechanisms and reinforcement learning to balance task success, protocol compliance, diversity, and reasoning efficiency.
  • Experiments indicate that ParaManager performs strongly on multiple benchmarks and generalizes robustly to previously unseen model pools.
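To make the "Agent-as-Tool" idea concrete, here is a minimal sketch of a unified action space: both plain tools and sub-agents are registered behind one normalized call interface, and every invocation returns explicit state feedback the orchestrator can plan against. All names here (`Action`, `Result`, `Registry`, `invoke`) are illustrative assumptions, not the paper's actual API.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Action:
    name: str   # which tool or agent to call
    args: dict  # normalized arguments under one protocol

@dataclass
class Result:
    ok: bool     # explicit state feedback: success or failure
    output: Any  # payload returned to the orchestrator

class Registry:
    """Holds tools and agents as interchangeable entries in one action space."""

    def __init__(self) -> None:
        self._entries: dict[str, Callable[[dict], Any]] = {}

    def register(self, name: str, fn: Callable[[dict], Any]) -> None:
        # A plain tool and a sub-agent's solve() are registered identically.
        self._entries[name] = fn

    def invoke(self, action: Action) -> Result:
        # Failures are surfaced as state, not raised, so the planner can react.
        try:
            return Result(ok=True, output=self._entries[action.name](action.args))
        except Exception as exc:
            return Result(ok=False, output=str(exc))

# Usage: a calculator tool and a toy "summarizer agent" share the same interface.
reg = Registry()
reg.register("calc", lambda a: a["x"] + a["y"])
reg.register("summarize_agent", lambda a: a["text"][:10])
print(reg.invoke(Action("calc", {"x": 2, "y": 3})).output)  # 5
```

The point of the normalization is extensibility: adding a new agent to the pool only means registering one more callable, rather than wiring up a bespoke interface.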

Abstract

Multi-agent systems (MAS) demonstrate clear advantages in tackling complex problems by coordinating diverse agents and external tools. However, most existing orchestration methods rely on static workflows or serial agent scheduling, and are further constrained by heterogeneous interface protocols between tools and agents. This leads to high system complexity and poor extensibility. To mitigate these issues, we propose Agent-as-Tool, a unified parallel orchestration paradigm that abstracts both agents and tools into a standardized, learnable action space with protocol normalization and explicit state feedback. Building on this paradigm, we train a lightweight orchestrator, ParaManager, which decouples planning decisions from subtask solving, enabling state-aware parallel subtask decomposition, delegation, and asynchronous execution. For training, we adopt a two-stage pipeline: it first improves robustness via supervised fine-tuning (SFT) on trajectories equipped with recovery mechanisms, then applies reinforcement learning (RL) to balance task success, protocol compliance, diversity, and reasoning efficiency. Experiments show that ParaManager achieves strong performance across multiple benchmarks and exhibits robust generalization under unseen model pools.
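The contrast between serial scheduling and the parallel decomposition described above can be sketched with standard asyncio. This is an assumed toy illustration, not the paper's implementation: the orchestrator splits a task into independent subtasks, dispatches all of them concurrently, and aggregates the results.

```python
import asyncio

async def delegate(subtask: str) -> str:
    # Stand-in for asynchronously calling a worker agent or tool;
    # the sleep marks where a real network/tool round-trip would yield.
    await asyncio.sleep(0)
    return f"done:{subtask}"

async def orchestrate(task: str) -> list[str]:
    # Toy decomposition into three independent subtasks.
    subtasks = [f"{task}.part{i}" for i in range(3)]
    # Independent subtasks run concurrently rather than one after another.
    return list(await asyncio.gather(*(delegate(s) for s in subtasks)))

results = asyncio.run(orchestrate("query"))
print(results)  # ['done:query.part0', 'done:query.part1', 'done:query.part2']
```

With serial scheduling the total latency is the sum of subtask latencies; with this fan-out pattern it approaches the maximum of them, which is the efficiency argument behind asynchronous delegation.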