Skill Retrieval Augmentation for Agentic AI

arXiv cs.CL / 4/28/2026


Key Points

  • The paper argues that agentic LLMs need reusable external skills beyond what can fit in their context window, and that explicitly enumerating skills does not scale as skill corpora grow.
  • It proposes Skill Retrieval Augmentation (SRA), a paradigm in which agents dynamically retrieve, incorporate, and apply relevant skills from large external skill corpora on demand (a minimal sketch of this loop follows the list).
  • The work introduces SRA-Bench, the first benchmark for decomposed evaluation of the full SRA pipeline: 5,400 capability-intensive test instances and 636 manually constructed gold skills, which are mixed with web-collected distractor skills to form a corpus of 26,262 skills.
  • Experiments show that retrieval-based skill augmentation can substantially improve agent performance, but they also reveal a key gap: agents load skills at roughly the same rate regardless of whether a gold skill was retrieved or whether the task actually requires external capabilities.
  • The authors conclude that the main bottleneck is not only retrieval quality, but also the base model’s ability to decide which skills to load and when external loading is truly necessary.
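
To make the paradigm concrete, here is a minimal sketch of what an SRA-style agent loop could look like: embed the task, rank a skill corpus by similarity, and let the model decide which candidate skill, if any, to load before acting. Everything here is illustrative, assuming a generic embedding retriever and a callable `llm`; `Skill`, `retrieve_skills`, and the reply-with-NONE protocol are placeholders, not the paper's implementation.

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class Skill:
    """A reusable skill: a name plus instructions the agent can load into context."""
    name: str
    description: str
    instructions: str


def embed(text: str) -> np.ndarray:
    """Placeholder embedding; any sentence encoder could stand in here."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(384)


def retrieve_skills(task: str, corpus: list[Skill], k: int = 5) -> list[Skill]:
    """Rank the corpus by cosine similarity to the task and return the top-k skills."""
    q = embed(task)
    q /= np.linalg.norm(q)
    scores = []
    for skill in corpus:
        v = embed(skill.description)
        scores.append(float(q @ (v / np.linalg.norm(v))))
    order = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in order]


def solve(task: str, corpus: list[Skill], llm) -> str:
    """SRA loop: retrieve candidates, let the model decide what to load, then act."""
    candidates = retrieve_skills(task, corpus)
    menu = "\n".join(f"- {s.name}: {s.description}" for s in candidates)
    # Incorporation step: the model must judge whether any candidate is needed at all.
    choice = llm(f"Task: {task}\nCandidate skills:\n{menu}\nReply with a skill name or NONE.")
    loaded = next((s for s in candidates if s.name == choice.strip()), None)
    context = loaded.instructions if loaded else ""
    return llm(f"{context}\nTask: {task}")
```

The explicit NONE option in the incorporation prompt is exactly where the reported gap would surface: a well-calibrated agent should answer NONE when no retrieved skill is relevant or the task needs no external capability.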

Abstract

As large language models (LLMs) evolve into agentic problem solvers, they increasingly rely on external, reusable skills to handle tasks beyond their native parametric capabilities. In existing agent systems, the dominant strategy for incorporating skills is to explicitly enumerate available skills within the context window. However, this strategy fails to scale: as skill corpora expand, context budgets are consumed rapidly, and the agent becomes markedly less accurate in identifying the right skill. To address this, this paper formulates Skill Retrieval Augmentation (SRA), a new paradigm in which agents dynamically retrieve, incorporate, and apply relevant skills from large external skill corpora on demand. To make this problem measurable, we construct a large-scale skill corpus and introduce SRA-Bench, the first benchmark for decomposed evaluation of the full SRA pipeline, covering skill retrieval, skill incorporation, and end-task execution. SRA-Bench contains 5,400 capability-intensive test instances and 636 manually constructed gold skills, which are mixed with web-collected distractor skills to form a large-scale corpus of 26,262 skills. Extensive experiments show that retrieval-based skill augmentation can substantially improve agent performance, validating the promise of the paradigm. At the same time, we uncover a fundamental gap in skill incorporation: current LLM agents tend to load skills at similar rates, regardless of whether a gold skill is retrieved or whether the task actually requires external capabilities. This shows that the bottleneck in skill augmentation lies not only in retrieval but also in the base model's ability to determine which skill to load and when external loading is actually needed. These findings position SRA as a distinct research problem and establish a foundation for the scalable augmentation of capabilities in future agent systems.
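
The load-rate finding suggests a simple diagnostic. The sketch below (with illustrative field names, not SRA-Bench's actual schema) computes skill-load rates sliced by whether a gold skill was retrieved and by whether the task truly needs one; near-identical rates across slices would reproduce the incorporation gap the abstract describes.

```python
from dataclasses import dataclass


@dataclass
class Trial:
    """One benchmark episode; field names are illustrative assumptions."""
    gold_retrieved: bool   # did the retriever surface a gold skill?
    needs_skill: bool      # does the task actually require an external capability?
    loaded_skill: bool     # did the agent choose to load any skill?


def load_rate(trials: list[Trial]) -> float:
    """Fraction of trials in which the agent loaded a skill."""
    return sum(t.loaded_skill for t in trials) / max(len(trials), 1)


def incorporation_report(trials: list[Trial]) -> dict[str, float]:
    """Load rates conditioned on retrieval quality and on actual skill need."""
    return {
        "gold_retrieved": load_rate([t for t in trials if t.gold_retrieved]),
        "no_gold_retrieved": load_rate([t for t in trials if not t.gold_retrieved]),
        "skill_needed": load_rate([t for t in trials if t.needs_skill]),
        "skill_not_needed": load_rate([t for t in trials if not t.needs_skill]),
    }
```

A well-behaved agent would show a high rate in the gold_retrieved and skill_needed slices and a low rate in the others; flat rates across all four would indicate that loading decisions are insensitive to both retrieval quality and task need.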