Clawed and Dangerous: Can We Trust Open Agentic Systems?
arXiv cs.AI · March 30, 2026
Key Points
- The paper argues that open agentic systems (LLM planning + tools + persistent memory + delegated execution) pose a security problem fundamentally different from traditional software due to probabilistic runtime decisions and uncertain environments.
- It presents a six-dimensional taxonomy for analyzing open agentic system security and synthesizes 50 papers across attacks, benchmarks, defenses, audits, and related engineering foundations.
- It introduces a secure-by-construction “reference doctrine” and an evaluation scorecard to assess the security posture of agent platforms.
- The review finds that current research is comparatively mature in attack characterization and benchmark construction, but weaker on deployment controls, operational governance, persistent-memory integrity, and reliable capability revocation.
- It concludes with a concrete engineering agenda aimed at building agent ecosystems that remain governable, auditable, and resilient even under compromise.
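One of the gaps the review highlights, reliable capability revocation, can be illustrated with a minimal check-at-use pattern: instead of handing an agent a long-lived credential, every tool invocation is verified against a registry, so revoking a grant takes effect on the very next call. This is a hypothetical sketch, not the paper's design; all names (`CapabilityRegistry`, `grant`, `revoke`, `invoke`) are illustrative.

```python
# Hypothetical sketch of check-at-use capability gating for agent tool calls.
# Not taken from the paper; names and structure are illustrative only.

class CapabilityRegistry:
    def __init__(self):
        # Maps (agent_id, tool_name) -> True while the grant is live.
        self._grants = {}

    def grant(self, agent_id, tool_name):
        self._grants[(agent_id, tool_name)] = True

    def revoke(self, agent_id, tool_name):
        # Removing the entry means the next invoke() check fails immediately;
        # nothing is cached on the agent side.
        self._grants.pop((agent_id, tool_name), None)

    def invoke(self, agent_id, tool_name, fn, *args):
        # Capability is verified on every call, never at grant time only.
        if not self._grants.get((agent_id, tool_name)):
            raise PermissionError(f"{agent_id} lacks capability for {tool_name}")
        return fn(*args)


registry = CapabilityRegistry()
registry.grant("agent-1", "search")
result = registry.invoke("agent-1", "search", lambda q: f"results for {q}", "llm security")
print(result)  # → results for llm security

registry.revoke("agent-1", "search")
try:
    registry.invoke("agent-1", "search", lambda q: q, "again")
except PermissionError as e:
    print("blocked:", e)
```

The design choice worth noting is that authorization lives in the registry, not in a token the agent holds: this trades a lookup per call for revocation that is guaranteed to be immediate, which is the property the review finds missing in current platforms.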