SKILLFOUNDRY: Building Self-Evolving Agent Skill Libraries from Heterogeneous Scientific Resources

arXiv cs.AI / 4/7/2026


Key Points

  • The paper introduces SkillFoundry, a self-evolving framework that turns fragmented scientific resources (papers, APIs, scripts, notebooks, docs, databases) into executable, validated agent “skill” packages.
  • SkillFoundry builds a domain knowledge tree, mines high-value branches, extracts operational contracts (inputs/outputs, steps, environment assumptions, provenance, tests), and then expands the skill library through closed-loop validation (expand/repair/merge/prune).
  • The authors report that 71.1% of mined skills differ from existing libraries (e.g., SkillHub, SkillSMP), indicating broader and less redundant coverage than hand-crafted or prior skill sets.
  • Experiments show that mined skills improve coding-agent performance on 5 of 6 MoSciBench datasets, and that task-specific skills generated on demand substantially improve performance on two genomics tasks: cell type annotation and the scDRS workflow.
  • Overall, the work argues that automatically mined, internally valid skills can both increase benchmark performance and provide a scalable foundation for more capable scientific agents.
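The expand/repair/merge/prune loop described above can be sketched in miniature. This is an illustrative Python sketch, not the paper's implementation: the `Skill` contract fields mirror those the paper lists (inputs/outputs, steps, environment assumptions, provenance, tests), but all class and function names here (`Skill`, `validate`, `evolve`, `try_repair`, `merge_key`) are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a closed-loop skill-library evolution pass.
# All names are illustrative; SkillFoundry's actual interfaces are not shown in the source.

@dataclass
class Skill:
    name: str
    inputs: list          # operational contract: expected inputs
    outputs: list         # expected outputs
    steps: list           # execution steps
    env: dict             # environment assumptions
    provenance: str       # source artifact (paper, notebook, API doc, ...)
    tests: list = field(default_factory=list)  # validation checks

def validate(skill):
    """Stand-in validator: a skill is internally valid if all its tests pass."""
    return all(test(skill) for test in skill.tests)

def evolve(library, candidates, try_repair, merge_key):
    """One closed-loop pass: expand with valid candidates, repair failures,
    merge near-duplicates, prune skills that still fail validation."""
    # Expand: admit validated candidates, attempting repair on failures.
    for skill in candidates:
        if not validate(skill):
            skill = try_repair(skill)  # e.g. regenerate steps from provenance
        if skill is not None and validate(skill):
            library.append(skill)
    # Merge: collapse skills that share the same contract signature.
    merged = {}
    for skill in library:
        merged.setdefault(merge_key(skill), skill)
    # Prune: keep only skills that remain internally valid.
    return [s for s in merged.values() if validate(s)]
```

In this sketch a skill's tests double as the validation signal, so the same check gates admission, survives merging, and drives pruning; the real framework presumably validates by executing the packaged tests in the skill's declared environment.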

Abstract

Modern scientific ecosystems are rich in procedural knowledge across repositories, APIs, scripts, notebooks, documentation, databases, and papers, yet much of this knowledge remains fragmented across heterogeneous artifacts that agents cannot readily operationalize. This gap between abundant scientific know-how and usable agent capabilities is a key bottleneck for building effective scientific agents. We present SkillFoundry, a self-evolving framework that converts such resources into validated agent skills, reusable packages that encode task scope, inputs and outputs, execution steps, environment assumptions, provenance, and tests. SkillFoundry organizes a target domain as a domain knowledge tree, mines resources from high-value branches, extracts operational contracts, compiles them into executable skill packages, and then iteratively expands, repairs, merges, or prunes the resulting library through a closed-loop validation process. SkillFoundry produces a substantially novel and internally valid skill library, with 71.1% of mined skills differing from existing skill libraries such as SkillHub and SkillSMP. We demonstrate that these mined skills improve coding agent performance on five of the six MoSciBench datasets. We further show that SkillFoundry can design new task-specific skills on demand for concrete scientific objectives, and that the resulting skills substantially improve performance on two challenging genomics tasks: cell type annotation and the scDRS workflow. Together, these results show that automatically mined skills improve agent performance on benchmarks and domain-specific tasks, expand coverage beyond hand-crafted skill libraries, and provide a practical foundation for more capable scientific agents.