Bilevel Optimization of Agent Skills via Monte Carlo Tree Search

arXiv cs.AI / 4/20/2026


Key Points

  • The paper addresses how to systematically optimize LLM agent “skills,” which are structured sets of instructions, tools, and supporting resources that strongly affect task performance.
  • It formulates skill design as a bilevel optimization problem, jointly handling skill structure selection and the content of each component.
  • The proposed framework uses Monte Carlo Tree Search in an outer loop to choose the skill structure, while an inner loop optimizes component content within that chosen structure.
  • Both optimization loops leverage LLMs to guide the search and refinement process, aiming to manage the highly coupled decision space.
  • Experiments on an open-source Operations Research question-answering dataset show improved agent performance when using the optimized skills.
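In symbols, the bilevel formulation sketched in the points above can be written as follows (the notation here is illustrative, not taken from the paper):

\[
\max_{s \in \mathcal{S}} \; J\bigl(s, c^{*}(s)\bigr)
\quad \text{s.t.} \quad
c^{*}(s) = \arg\max_{c \in \mathcal{C}(s)} J(s, c),
\]

where \(s\) ranges over candidate skill structures \(\mathcal{S}\), \(c\) over the component content \(\mathcal{C}(s)\) admissible under structure \(s\), and \(J\) denotes agent task performance. The outer (structure) and inner (content) problems are coupled because the value of a structure depends on how well its components can be filled in.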

Abstract

Agent skills are structured collections of instructions, tools, and supporting resources that help large language model (LLM) agents perform particular classes of tasks. Empirical evidence shows that the design of skills can materially affect agent task performance, yet systematically optimizing skills remains challenging. Since a skill comprises instructions, tools, and supporting resources in a structured way, optimizing it requires jointly determining both the structure of these components and the content each component contains. This gives rise to a complex decision space with strong interdependence between structure and components. We therefore represent these two coupled decisions as skill structure and component content, and formulate skill optimization as a bilevel optimization problem. We propose a bilevel optimization framework in which an outer loop employs Monte Carlo Tree Search to determine the skill structure, while an inner loop refines the component content within the structure selected by the outer loop. In both loops, we employ LLMs to assist the optimization procedure. We evaluate the proposed framework on an open-source Operations Research question-answering dataset, and the experimental results suggest that agents equipped with the optimized skill achieve improved performance.
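The outer/inner split described in the abstract can be sketched in miniature. The toy below replaces the paper's LLM-guided proposals with stand-ins: the outer loop is a one-level UCB1 search over a fixed set of candidate structures (the paper's MCTS composes structures from components), and the inner loop is a hill climb over a synthetic "content" score rather than LLM-driven rewriting evaluated on the OR benchmark. All names and numbers are illustrative assumptions, not the authors' implementation.

```python
import math
import random

# Hypothetical candidate skill structures and the task score each can
# attain; in the real framework these would come from LLM proposals and
# benchmark evaluation, not a lookup table.
STRUCTURES = [
    "instructions_only",
    "instructions+tools",
    "instructions+tools+resources",
]
BASE_SCORE = {
    "instructions_only": 0.3,
    "instructions+tools": 0.5,
    "instructions+tools+resources": 0.8,
}

def inner_optimize(structure, rng, steps=20):
    """Inner loop: refine component content within a fixed structure
    (hill climbing stands in for LLM-guided content refinement)."""
    best, content = 0.0, 0.0
    for _ in range(steps):
        candidate = content + rng.uniform(-0.05, 0.1)   # propose a tweak
        score = (BASE_SCORE[structure] + min(candidate, 0.1)
                 + rng.uniform(-0.02, 0.02))            # noisy evaluation
        if score > best:
            best, content = score, candidate            # keep improvements
    return best

def mcts_outer(iters=60, c=1.0, seed=1):
    """Outer loop: UCB1 selection over structures, i.e. a one-level MCTS."""
    rng = random.Random(seed)
    counts = {s: 0 for s in STRUCTURES}
    totals = {s: 0.0 for s in STRUCTURES}
    for t in range(1, iters + 1):
        unvisited = [s for s in STRUCTURES if counts[s] == 0]
        if unvisited:                                   # expansion
            s = unvisited[0]
        else:                                           # UCB1 selection
            s = max(STRUCTURES, key=lambda a: totals[a] / counts[a]
                    + c * math.sqrt(math.log(t) / counts[a]))
        reward = inner_optimize(s, rng)                 # simulation
        counts[s] += 1                                  # backpropagation
        totals[s] += reward
    return max(STRUCTURES, key=lambda a: totals[a] / max(counts[a], 1))
```

Under this synthetic objective, `mcts_outer()` converges on the richest structure; in the actual framework the reward for each simulated structure would instead come from evaluating the agent, with its refined content, on held-out questions.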