Trace2Skill: Distill Trajectory-Local Lessons into Transferable Agent Skills

arXiv cs.AI / 3/27/2026


Key Points

  • Trace2Skill is a new framework for distilling diverse, trajectory-level execution experience into transferable, domain-specific skills for LLM agents, addressing the scalability limits of manual authoring and the fragility of naive automated approaches.
  • The method uses a parallel fleet of sub-agents to analyze a broad pool of executions, then hierarchically consolidates extracted trajectory-specific lessons into a unified, conflict-free skill directory through inductive reasoning.
  • Trace2Skill can both deepen existing human-written skills and generate new skills from scratch, aiming to avoid overfitting to non-generalizable, trajectory-local patterns.
  • Experiments on spreadsheet, VisionQA, and math-reasoning tasks show significant improvements over strong baselines (including Anthropic’s official xlsx skills), with benefits that transfer across LLM model scales and generalize to out-of-distribution (OOD) settings.
  • The paper reports that skills evolved on Qwen3.5-35B trajectories can substantially improve a larger Qwen3.5-122B agent (up to 57.65 absolute percentage points on WikiTableQuestions) without parameter updates, external retrieval modules, or large model sizes.
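The extract-then-consolidate pipeline described above resembles a map-reduce over trajectories: parallel sub-agents each distill a lesson, and the lessons are hierarchically merged into one skill directory. The sketch below is a structural illustration only; `extract_lesson` and `merge` are hypothetical stand-ins for the paper's LLM sub-agent calls and inductive consolidation step, not Trace2Skill's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def extract_lesson(trajectory):
    # Stand-in for a sub-agent LLM call that analyzes one execution
    # trajectory and returns a trajectory-specific lesson.
    return f"lesson from {trajectory['task']}"

def merge(lessons_a, lessons_b):
    # Stand-in for inductive consolidation: deduplicate so the merged
    # skill set stays conflict-free.
    return sorted(set(lessons_a) | set(lessons_b))

def consolidate(lesson_sets):
    # Hierarchical pairwise reduction of per-trajectory lesson sets
    # into a single unified skill directory.
    while len(lesson_sets) > 1:
        pairs = [lesson_sets[i:i + 2] for i in range(0, len(lesson_sets), 2)]
        lesson_sets = [merge(p[0], p[1]) if len(p) == 2 else p[0] for p in pairs]
    return lesson_sets[0]

def trace2skill(trajectories, max_workers=8):
    # Dispatch a parallel "fleet" of sub-agents over the execution pool,
    # then consolidate their lessons hierarchically.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        lessons = list(pool.map(extract_lesson, trajectories))
    return consolidate([[lesson] for lesson in lessons])
```

In a real system the placeholders would issue prompts to an LLM; the parallel-map plus hierarchical-merge structure is the point of the sketch.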

Abstract

Equipping Large Language Model (LLM) agents with domain-specific skills is critical for tackling complex tasks. Yet manual authoring creates a severe scalability bottleneck. Conversely, automated skill generation often yields fragile or fragmented results because it either relies on shallow parametric knowledge or sequentially overfits to non-generalizable, trajectory-local lessons. To overcome this, we introduce Trace2Skill, a framework that mirrors how human experts author skills: by holistically analyzing broad execution experience before distilling it into a single, comprehensive guide. Instead of reacting sequentially to individual trajectories, Trace2Skill dispatches a parallel fleet of sub-agents to analyze a diverse pool of executions. It extracts trajectory-specific lessons and hierarchically consolidates them into a unified, conflict-free skill directory via inductive reasoning. Trace2Skill supports both deepening existing human-written skills and creating new ones from scratch. Experiments in challenging domains, such as spreadsheet, VisionQA, and math reasoning, show that Trace2Skill significantly improves upon strong baselines, including Anthropic's official xlsx skills. Crucially, this trajectory-grounded evolution does not merely memorize task instances or model-specific quirks: evolved skills transfer across LLM scales and generalize to OOD settings. For example, skills evolved by Qwen3.5-35B on its own trajectories improved a Qwen3.5-122B agent by up to 57.65 absolute percentage points on WikiTableQuestions. Ultimately, our results demonstrate that complex agent experience can be packaged into highly transferable, declarative skills: they require no parameter updates or external retrieval modules, and use open-source models as small as 35B parameters.