Clawed and Dangerous: Can We Trust Open Agentic Systems?

arXiv cs.AI / 3/30/2026


Key Points

  • The paper argues that open agentic systems (LLM planning + tools + persistent memory + delegated execution) pose a security problem fundamentally different from traditional software due to probabilistic runtime decisions and uncertain environments.
  • It presents a six-dimensional taxonomy for analyzing open agentic system security and synthesizes 50 papers across attacks, benchmarks, defenses, audits, and related engineering foundations.
  • It introduces a secure-by-construction “reference doctrine” and an evaluation scorecard to assess the security posture of agent platforms.
  • The review finds current research is comparatively mature for attack characterization and benchmark building, but weaker on deployment controls, operational governance, persistent-memory integrity, and reliable capability revocation.
  • It concludes with a concrete engineering agenda aimed at building agent ecosystems that remain governable, auditable, and resilient even under compromise.
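The evaluation scorecard mentioned above can be pictured as a simple data structure that rates a platform's maturity along several dimensions. The sketch below is purely illustrative: the dimension names are hypothetical placeholders, not the paper's actual six-dimensional taxonomy, and the 0–3 maturity scale is an assumption.

```python
from dataclasses import dataclass, field

# Hypothetical dimensions for illustration only; the paper's actual
# taxonomy dimensions are not reproduced here.
DIMENSIONS = [
    "input_mediation",       # how untrusted natural-language inputs reach the planner
    "tool_authorization",    # least-privilege gating of tool calls
    "memory_integrity",      # protection of persistent memory
    "execution_sandboxing",  # isolation of delegated execution
    "audit_logging",         # traceability of agent actions
    "capability_revocation", # ability to withdraw granted authority
]

@dataclass
class Scorecard:
    # dimension name -> maturity level on an assumed 0..3 scale
    scores: dict = field(default_factory=dict)

    def rate(self, dimension: str, level: int) -> None:
        if dimension not in DIMENSIONS or not 0 <= level <= 3:
            raise ValueError("unknown dimension or level out of range")
        self.scores[dimension] = level

    def posture(self) -> float:
        """Overall posture as the mean maturity across rated dimensions."""
        return sum(self.scores.values()) / len(self.scores) if self.scores else 0.0

card = Scorecard()
card.rate("tool_authorization", 2)
card.rate("capability_revocation", 0)
print(card.posture())
```

A real scorecard would likely weight dimensions differently and record evidence per rating, but even this flat version makes gaps (here, the zero on revocation) visible at a glance.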

Abstract

Open agentic systems combine LLM-based planning with external capabilities, persistent memory, and privileged execution. They are used in coding assistants, browser copilots, and enterprise automation. OpenClaw is a visible instance of this broader class. Despite receiving little attention so far, their security challenge is fundamentally different from that of traditional software, which relies on predictable execution and well-defined control flow. In open agentic systems, everything is "probabilistic": plans are generated at runtime, key decisions may be shaped by untrusted natural-language inputs and tool outputs, execution unfolds in uncertain environments, and actions are taken under authority delegated by human users. The central challenge is therefore not merely robustness against individual attacks, but the governance of agentic behavior under persistent uncertainty. This paper systematizes the area through a software engineering lens. We introduce a six-dimensional analytical taxonomy and synthesize 50 papers spanning attacks, benchmarks, defenses, audits, and adjacent engineering foundations. From this synthesis, we derive a reference doctrine for secure-by-construction agent platforms, together with an evaluation scorecard for assessing platform security posture. Our review shows that the literature is relatively mature in attack characterization and benchmark construction, but remains weak in deployment controls, operational governance, persistent-memory integrity, and capability revocation. These gaps define a concrete engineering agenda for building agent ecosystems that are governable, auditable, and resilient under compromise.
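One of the gaps the abstract names, reliable capability revocation, can be made concrete with a small sketch. The design below is an assumption, not the paper's mechanism: it models delegated authority as a revocable, time-bounded token that is re-checked on every tool call, so that withdrawal takes effect even while an agent's plan is mid-execution.

```python
import time

class CapabilityToken:
    """Hypothetical revocable, time-bounded capability delegated to an agent."""

    def __init__(self, tool: str, ttl_seconds: float):
        self.tool = tool
        self.expires_at = time.monotonic() + ttl_seconds
        self.revoked = False

    def revoke(self) -> None:
        self.revoked = True

    def is_valid(self) -> bool:
        return not self.revoked and time.monotonic() < self.expires_at

def invoke_tool(token: CapabilityToken, tool: str, action):
    # Authority is checked at every invocation, not only at grant time,
    # so revocation and expiry are enforced mid-plan.
    if token.tool != tool or not token.is_valid():
        raise PermissionError(f"capability for {tool!r} is not valid")
    return action()

token = CapabilityToken("read_file", ttl_seconds=60)
print(invoke_tool(token, "read_file", lambda: "ok"))
token.revoke()
# A subsequent invoke_tool(token, "read_file", ...) would raise PermissionError.
```

The key design choice, checking validity per call rather than caching the grant, is what makes revocation "reliable" in the sense the review asks for; a token checked only once at plan start could keep authorizing actions long after a human withdrew it.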