AI Navigate

AI Planning Framework for LLM-Based Web Agents

arXiv cs.AI / 3/16/2026


Key Points

  • The paper formalizes web-based tasks as sequential decision-making problems and provides a taxonomy that maps LLM agent architectures to classical planning paradigms.
  • It aligns Step-by-Step agents with Breadth-First Search (BFS), Tree Search agents with Best-First Tree Search, and Full-Plan-in-Advance agents with Depth-First Search (DFS), enabling principled diagnosis of failures such as context drift and incoherent task decomposition.
  • It proposes five novel evaluation metrics for trajectory quality and introduces a new dataset of 794 human-labeled trajectories from the WebArena benchmark.
  • Empirical results show that Step-by-Step agents align more closely with human gold trajectories (38% overall success), while Full-Plan-in-Advance agents excel in technical measures such as element accuracy (89%), underscoring the need to choose an architecture based on application constraints.
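The three mappings in the taxonomy above correspond to textbook search skeletons. The sketch below is purely illustrative, not code from the paper: the toy web-navigation graph, page names, and heuristic scores are invented for this example. It shows how a Step-by-Step agent behaves like BFS over a frontier of states, a Tree Search agent like best-first search ordered by a value estimate, and a Full-Plan-in-Advance agent like DFS committing to one branch at a time.

```python
from collections import deque
import heapq

# Toy web-navigation graph (hypothetical page names, not from the paper).
SITE = {
    "home": ["search", "cart"],
    "search": ["results"],
    "results": ["checkout"],
    "cart": ["checkout"],
    "checkout": [],
}

def neighbors(page):
    return SITE[page]

def bfs_plan(start, goal):
    """Step-by-Step ~ BFS: expand the frontier one level (one step) at a time."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])

def best_first_plan(start, goal, score):
    """Tree Search ~ best-first: always expand the most promising node next."""
    frontier, seen = [(score(start), [start])], {start}
    while frontier:
        _, path = heapq.heappop(frontier)
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (score(nxt), path + [nxt]))

def dfs_plan(start, goal):
    """Full-Plan-in-Advance ~ DFS: commit to one branch before backtracking."""
    stack, seen = [[start]], {start}
    while stack:
        path = stack.pop()
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(path + [nxt])
```

All three strategies reach `checkout` on this toy graph, but they explore the state space in different orders; this difference in trajectory shape, rather than final success alone, is what the paper's trajectory-quality metrics are designed to surface.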

Abstract

Developing autonomous agents for web-based tasks is a core challenge in AI. While Large Language Model (LLM) agents can interpret complex user requests, they often operate as black boxes, making it difficult to diagnose why they fail or how they plan. This paper addresses this gap by formally treating web tasks as sequential decision-making processes. We introduce a taxonomy that maps modern agent architectures to traditional planning paradigms: Step-by-Step agents to Breadth-First Search (BFS), Tree Search agents to Best-First Tree Search, and Full-Plan-in-Advance agents to Depth-First Search (DFS). This framework allows for a principled diagnosis of system failures like context drift and incoherent task decomposition. To evaluate these behaviors, we propose five novel evaluation metrics that assess trajectory quality beyond simple success rates. We support this analysis with a new dataset of 794 human-labeled trajectories from the WebArena benchmark. Finally, we validate our evaluation framework by comparing a baseline Step-by-Step agent against a novel Full-Plan-in-Advance implementation. Our results reveal that while the Step-by-Step agent aligns more closely with human gold trajectories (38% overall success), the Full-Plan-in-Advance agent excels in technical measures such as element accuracy (89%), demonstrating the necessity of our proposed metrics for selecting appropriate agent architectures based on specific application constraints.