Supply-Chain Poisoning Attacks Against LLM Coding Agent Skill Ecosystems
arXiv cs.CL / 4/6/2026
💬 OpinionIdeas & Deep AnalysisModels & Research
Key Points
- The paper warns that LLM coding agents can be compromised through third-party “skill” packages from open marketplaces, since these skills run as operational directives with system-level privileges.
- It introduces Document-Driven Implicit Payload Execution (DDIPE), an attack that hides malicious logic inside code examples and configuration templates in skill documentation that agents may reuse automatically.
- Using an LLM-driven method, the authors generate 1,070 adversarial skills spanning 15 MITRE ATT&CK categories and show DDIPE bypass rates of 11.6% to 33.5% across four frameworks and five models.
- While static analysis catches most malicious skills, a small fraction (2.5%) evade both detection and alignment, indicating residual risks even with defenses.
- The work reports responsible disclosure results: four confirmed vulnerabilities and two fixes, highlighting the need for stronger security review and safer documentation/code reuse practices for agent skill ecosystems.
💡 Insights using this article
This article is featured in our daily AI news digest — key takeaways and action items at a glance.
Related Articles

How Bash Command Safety Analysis Works in AI Systems
Dev.to

How to Get Better Output from AI Tools (Without Burning Time and Tokens)
Dev.to

How I Added LangChain4j Without Letting It Take Over My Spring Boot App
Dev.to

The Future of Artificial Intelligence in Everyday Life
Dev.to

Teaching Your AI to Read: Automating Document Triage for Investigators
Dev.to