RunAgent: Interpreting Natural-Language Plans with Constraint-Guided Execution

arXiv cs.LG / 5/4/2026

📰 News · Developer Stack & Infrastructure · Models & Research

Key Points

  • RunAgent is a multi-agent platform that executes natural-language plans more reliably by enforcing stepwise execution through constraints and rubrics.
  • It introduces an agentic language with explicit control constructs such as IF, GOTO, and FORALL, combining the flexibility of natural language with more deterministic, program-like control.
  • For each step, RunAgent not only verifies the step’s output syntactically and semantically, but also autonomously derives and validates relevant constraints from the task description and the specific instance.
  • The system dynamically chooses among LLM-based reasoning, tool use, and code generation/execution (e.g., Python), and includes error-correction mechanisms to maintain correctness.
  • Experiments on the Natural-plan and SciBench datasets show that RunAgent outperforms baseline LLMs and state-of-the-art PlanGEN approaches.
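The control constructs in the second bullet can be pictured with a small sketch. The following Python interpreter for a step list with IF / GOTO / FORALL-style control is purely illustrative: the step schema (`kind`, `target`, `cond`, `items`, `body`, `check`) and the per-step constraint hook are assumptions of this sketch, not RunAgent's actual representation.

```python
# Hypothetical sketch of a tiny plan interpreter with explicit control
# constructs (IF, GOTO, FORALL), in the spirit of RunAgent's agentic
# language. All field names here are illustrative assumptions.

def run_plan(steps, state):
    """Execute steps sequentially; IF branches, GOTO jumps, FORALL iterates."""
    pc = 0  # program counter over the step list
    while pc < len(steps):
        step = steps[pc]
        kind = step["kind"]
        if kind == "IF":
            # Jump to the target step only when the condition holds.
            pc = step["target"] if step["cond"](state) else pc + 1
        elif kind == "GOTO":
            pc = step["target"]
        elif kind == "FORALL":
            # Apply the body to every item produced from the current state.
            for item in step["items"](state):
                step["body"](state, item)
            pc += 1
        else:  # plain action step
            step["action"](state)
            # Optional per-step constraint check, echoing the rubric idea.
            if not step.get("check", lambda s: True)(state):
                raise RuntimeError(f"constraint violated at step {pc}")
            pc += 1
    return state


# Toy plan: accumulate a sum, then branch on its value.
plan = [
    {"kind": "ACT", "action": lambda s: s.__setitem__("total", 0)},
    {"kind": "FORALL", "items": lambda s: [1, 2, 3],
     "body": lambda s, x: s.__setitem__("total", s["total"] + x)},
    {"kind": "IF", "cond": lambda s: s["total"] > 5, "target": 4},
    {"kind": "ACT", "action": lambda s: s.__setitem__("flag", "small")},
    {"kind": "ACT", "action": lambda s: s.__setitem__("done", True)},
]
result = run_plan(plan, {})
```

In the real system each action would be carried out by an LLM call, a tool, or generated code rather than a Python lambda; the point is only that explicit control constructs make the execution path deterministic and checkable.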

Abstract

Humans solve problems by executing targeted plans, yet large language models (LLMs) remain unreliable for structured workflow execution. We propose RunAgent, a multi-agent plan execution platform that interprets natural-language plans while enforcing stepwise execution through constraints and rubrics. RunAgent bridges the expressiveness of natural language with the determinism of programming via an agentic language with explicit control constructs (e.g., \texttt{IF}, \texttt{GOTO}, \texttt{FORALL}). Beyond verifying the syntactic and semantic correctness of each step's output against that step's instruction, RunAgent autonomously derives and validates constraints from the task description and its instance at each step. RunAgent also dynamically selects among LLM-based reasoning, tool usage, and code generation and execution (e.g., in Python), and incorporates error-correction mechanisms to ensure correctness. Finally, RunAgent filters the context history by retaining only the information relevant to the step being executed. Evaluations on the Natural-plan and SciBench datasets demonstrate that RunAgent outperforms baseline LLMs and state-of-the-art PlanGEN methods.
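The verify-and-correct loop described in the abstract can be sketched as follows. This is an assumption-laden illustration, not RunAgent's code: the `Constraint` class, the feedback string passed back to the step, and the retry budget are all inventions of this sketch, standing in for the paper's constraint derivation, validation, and error-correction mechanisms.

```python
# Illustrative sketch of per-step constraint validation with an
# error-correction retry loop, in the spirit of RunAgent's design.
# Class and parameter names are hypothetical.

class Constraint:
    """A named predicate that a step's output must satisfy."""
    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate

    def holds(self, output):
        return self.predicate(output)


def execute_step(step, context, constraints, max_retries=3):
    """Run one step; on a constraint violation, retry with feedback."""
    feedback = None
    for _ in range(max_retries):
        # In the real system this would be an LLM call, a tool invocation,
        # or generated code; here it is just a callable.
        output = step(context, feedback)
        violated = [c.name for c in constraints if not c.holds(output)]
        if not violated:
            return output
        # Feed the violated constraints back as an error-correction signal.
        feedback = "; ".join(violated)
    raise RuntimeError(f"step failed after {max_retries} attempts: {feedback}")


# Toy usage: a step that only produces a valid answer once it sees feedback.
def flaky_step(context, feedback):
    return 42 if feedback else -1

checks = [Constraint("output must be non-negative", lambda o: o >= 0)]
answer = execute_step(flaky_step, {}, checks)
```

The design point mirrored here is that constraints are checked per step rather than only on the final answer, so an execution error is caught and corrected at the step where it occurs instead of propagating through the rest of the plan.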