HarmfulSkillBench: How Do Harmful Skills Weaponize Your Agents?

arXiv cs.AI · April 20, 2026


Key Points

  • The paper introduces HarmfulSkillBench, a first-of-its-kind benchmark to evaluate agent safety risks specifically from “harmful skills” that can be reused in public agent skill ecosystems.
  • Large-scale measurement across two major registries (98,440 skills total) finds that 4.93% of skills (4,858) are harmful, with a higher harmful rate on ClawHub (8.84%) than on Skills.Rest (3.49%).
  • The authors use an LLM-driven scoring approach based on a harmful-skill taxonomy to identify and quantify harmful skills at scale.
  • Testing six LLMs shows that using a pre-installed harmful skill significantly reduces refusal rates and increases harm scores, especially when the malicious intent is implicit rather than explicitly stated in the user request.
  • The study includes responsible disclosure to the affected registries and releases the benchmark to enable future research on mitigating harmful-skill weaponization in realistic agent settings.
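The LLM-driven scoring approach mentioned above can be pictured as an LLM-as-judge pipeline: a judge model is prompted with a skill's description and a harm taxonomy, and its structured reply is parsed into a label and score. The sketch below is purely illustrative — the category names, prompt wording, JSON schema, and 0.5 threshold are assumptions for demonstration, not the paper's actual taxonomy or rubric.

```python
# Illustrative sketch of a taxonomy-grounded LLM-as-judge scoring step.
# The taxonomy entries, prompt format, and threshold are hypothetical,
# not the authors' actual implementation.

import json

# Example harm categories drawn from the abstract's examples (the paper's
# full taxonomy has 20 categories, which are not listed here).
TAXONOMY = ["cyber attacks", "fraud and scams",
            "privacy violations", "sexual content generation"]

def build_judge_prompt(skill_description: str) -> str:
    """Build a prompt asking the judge model to label a skill against the taxonomy."""
    categories = ", ".join(TAXONOMY)
    return (
        "You are a safety auditor. Given the agent skill below, reply with JSON: "
        f'{{"harmful": bool, "category": one of [{categories}] or null, '
        '"score": float in [0, 1]}\n\n'
        f"Skill description:\n{skill_description}"
    )

def parse_judge_reply(reply: str, threshold: float = 0.5) -> dict:
    """Parse the judge model's JSON reply and apply a harmfulness threshold."""
    data = json.loads(reply)
    score = float(data.get("score", 0.0))
    return {
        "harmful": bool(data.get("harmful")) and score >= threshold,
        "category": data.get("category"),
        "score": score,
    }
```

In practice, `build_judge_prompt` would be sent to a judge LLM for each of the registry's skills, and `parse_judge_reply` would aggregate the results into per-registry harmful rates such as those reported above.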

Abstract

Large language models (LLMs) have evolved into autonomous agents that rely on open skill ecosystems (e.g., ClawHub and Skills.Rest), hosting numerous publicly reusable skills. Existing security research on these ecosystems mainly focuses on vulnerabilities within skills, such as prompt injection. However, there is a critical gap regarding skills that may be misused for harmful actions (e.g., cyber attacks, fraud and scams, privacy violations, and sexual content generation), namely harmful skills. In this paper, we present the first large-scale measurement study of harmful skills in agent ecosystems, covering 98,440 skills across two major registries. Using an LLM-driven scoring system grounded in our harmful skill taxonomy, we find that 4.93% of skills (4,858) are harmful, with ClawHub exhibiting an 8.84% harmful rate compared to 3.49% on Skills.Rest. We then construct HarmfulSkillBench, the first benchmark for evaluating agent safety against harmful skills in realistic agent contexts, comprising 200 harmful skills across 20 categories and four evaluation conditions. By evaluating six LLMs on HarmfulSkillBench, we find that presenting a harmful task through a pre-installed skill substantially lowers refusal rates across all models, with the average harm score rising from 0.27 without the skill to 0.47 with it, and further to 0.76 when the harmful intent is implicit rather than stated as an explicit user request. We responsibly disclose our findings to the affected registries and release our benchmark to support future research (see https://github.com/TrustAIRLab/HarmfulSkillBench).