SkillForge: Forging Domain-Specific, Self-Evolving Agent Skills in Cloud Technical Support

arXiv cs.AI / 4/13/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper introduces SkillForge, an LLM agent skill framework for creating domain-specific skills in enterprise cloud technical support, a setting where existing skill-creation methods lack grounding in real task requirements.
  • SkillForge uses a Domain-Contextualized Skill Creator to synthesize initial skills from knowledge bases and historical support tickets, improving alignment with expert-authored reference responses.
  • To keep skill quality from stagnating after deployment, it implements a self-evolving closed loop that analyzes execution failures, diagnoses which skill components are deficient, and rewrites the skills to address those gaps.
  • The iterative three-stage pipeline (Failure Analyzer → Skill Diagnostician → Skill Optimizer) is designed to run in batches using accumulated operational evidence, enabling continuous refinement.
  • Experiments on five real-world cloud support scenarios covering 1,883 tickets and 3,737 tasks show both better initial skills than generic creators and progressive improvement across multiple starting skill types over successive evolution rounds.
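The batched three-stage loop described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the class and function names (`Skill`, `ExecutionRecord`, `analyze_failures`, `diagnose`, `optimize`) are invented, and simple heuristics stand in for the LLM-driven analysis, diagnosis, and rewriting stages.

```python
from dataclasses import dataclass

# Illustrative sketch of a SkillForge-style self-evolution round.
# All names are assumptions; toy logic replaces the paper's LLM stages.

@dataclass
class Skill:
    name: str
    instructions: str
    revision: int = 0

@dataclass
class ExecutionRecord:
    ticket_id: str
    success: bool
    trace: str

def analyze_failures(batch):
    """Failure Analyzer: collect failed executions from a deployment batch."""
    return [r for r in batch if not r.success]

def diagnose(skill, failures):
    """Skill Diagnostician: map failure traces to deficient skill components
    (a toy heuristic stands in for LLM-based diagnosis)."""
    return [f"gap inferred from ticket {f.ticket_id}: {f.trace}" for f in failures]

def optimize(skill, deficiencies):
    """Skill Optimizer: rewrite the skill to eliminate the diagnosed gaps."""
    patch = "\n".join(f"- address: {d}" for d in deficiencies)
    return Skill(skill.name, skill.instructions + "\n" + patch, skill.revision + 1)

def evolution_round(skill, batch):
    failures = analyze_failures(batch)
    if not failures:
        return skill  # no operational evidence of deficiency this round
    return optimize(skill, diagnose(skill, failures))

# One round over a small batch of accumulated operational evidence
skill = Skill("vm-restart-triage", "Check hypervisor logs first.")
batch = [
    ExecutionRecord("T-1", True, "resolved"),
    ExecutionRecord("T-2", False, "missed quota-exceeded case"),
]
skill = evolution_round(skill, batch)
print(skill.revision)  # → 1
```

Running further rounds with new batches would keep incrementing `revision`, mirroring the paper's claim of progressive improvement across successive evolution rounds.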

Abstract

Deploying LLM-powered agents in enterprise scenarios such as cloud technical support demands high-quality, domain-specific skills. However, existing skill creators lack domain grounding, producing skills poorly aligned with real-world task requirements. Moreover, once deployed, there is no systematic mechanism to trace execution failures back to skill deficiencies and drive targeted refinements, leaving skill quality stagnant despite accumulating operational evidence. We introduce SkillForge, a self-evolving framework that closes an end-to-end creation-evaluation-refinement loop. To produce well-aligned initial skills, a Domain-Contextualized Skill Creator grounds skill synthesis in knowledge bases and historical support tickets. To enable continuous self-optimization, a three-stage pipeline -- Failure Analyzer, Skill Diagnostician, and Skill Optimizer -- automatically diagnoses execution failures in batch, pinpoints the underlying skill deficiencies, and rewrites the skill to eliminate them. This cycle runs iteratively, allowing skills to self-improve with every round of deployment feedback. Evaluated on five real-world cloud support scenarios spanning 1,883 tickets and 3,737 tasks, experiments show that: (1) the Domain-Contextualized Skill Creator produces substantially better initial skills than the generic skill creator, as measured by consistency with expert-authored reference responses from historical tickets; and (2) the self-evolution loop progressively improves skill quality from diverse starting points (including expert-authored, domain-created, and generic skills) across successive rounds, demonstrating that automated evolution can surpass manually curated expert knowledge.