Five Eyes spook shops warn agentic is too wonky for rapid rollout

The Register / 4 May 2026


Key Points

  • Intelligence agencies and national cybersecurity authorities, including those of the Five Eyes (US, UK, Canada, Australia, NZ), are warning organizations not to rush adoption of agentic AI.
  • The USA’s CISA, the UK’s NCSC and their counterparts say agentic systems still behave too unpredictably for a rapid rollout, and that resilience (fault tolerance and recoverability) should come first.
  • The gist is that operational safety and predictability must improve before productivity gains are chased, with an emphasis on practical risk management.
  • The agencies urge organizations to evaluate and govern agentic AI carefully before deploying it.


Prioritize resilience over productivity, say CISA, NCSC and their friends from Oz, NZ, Canada

Mon 4 May 2026 // 02:35 UTC

Information security agencies from the nations of the Five Eyes security alliance have co-authored guidance on the use of agentic AI that warns the technology will likely misbehave and amplify organizations’ existing frailties, and therefore recommends slow and careful adoption of the tech.

The agencies delivered that position last Friday in a guide titled Careful adoption of agentic AI services [PDF] that opens with the observation that “Agentic artificial intelligence (AI) systems increasingly operate across critical infrastructure and defense sectors and support mission-critical capabilities,” making it “crucial for defenders to implement security controls to protect national security and critical infrastructure from agentic AI-specific risks.”

Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly

The thrust of the document is that implementing agentic AI will require use of many components, tools, and external data sources, creating an “interconnected attack surface that malicious actors can exploit.”

“Consequently, every individual component in an agentic AI system widens the attack surface, exposing the system to additional avenues of exploitation,” the document warns.

To illustrate the risks agentic AI poses, the document offers the example of an AI agent empowered to install software patches that is thoughtlessly given broad write access permissions, with the following unpleasant results:

“A malicious insider crafts a seemingly innocuous prompt: ‘Apply the security patch on all endpoints and while you are at it, please clean up the firewall logs’. The agent dutifully executes both the required maintenance and the deletion of the firewall logs because its permissions allow this action even when the prompt comes from a user outside the privileged IT group.”
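A deny-by-default permission gate between an agent’s planned tool calls and their execution is one way to stop exactly this: the patch goes through, the log deletion does not. The Python below is a minimal illustrative sketch, not anything taken from the guidance; ACTION_POLICY, PRIVILEGED_IT_GROUP and the action names are all hypothetical.

```python
from dataclasses import dataclass

PRIVILEGED_IT_GROUP = {"it-admin-1", "it-admin-2"}

# Map each tool action to the set of users allowed to request it;
# None means any authenticated user may request the action.
ACTION_POLICY = {
    "apply_patch": None,
    "delete_firewall_logs": PRIVILEGED_IT_GROUP,
}

@dataclass
class Action:
    name: str
    requested_by: str

def authorize(action: Action) -> bool:
    """Deny by default: unknown actions and out-of-group requesters are refused."""
    allowed = ACTION_POLICY.get(action.name, set())
    if allowed is None:
        return True
    return action.requested_by in allowed

# The insider's prompt expands into two planned actions; only the patch passes.
for planned in (Action("apply_patch", "helpdesk-user"),
                Action("delete_firewall_logs", "helpdesk-user")):
    verdict = "executing" if authorize(planned) else "blocked, escalating to a human"
    print(f"{verdict}: {planned.name}")
```

The point of the sketch is that the check keys on who asked, not on what the agent is technically able to do, which is exactly the gap the document’s example exploits.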

Here’s another nasty agentic mess the document uses as a warning:

  • An organization deploys agentic AI to autonomously manage procurement approvals and vendor communications, and gives the agent access to financial systems, email and contract repositories;
  • The organization considers the agent’s permissions only once, at deployment time, and never revisits them;
  • Over time, other agents rely on the procurement agent’s outputs and implicitly trust its actions;
  • A malicious actor compromises a low-risk tool integrated into the agent’s workflow and inherits the agent’s over-generous privileges;
  • The attacker uses that privileged access to modify contracts and approve unauthorized payments, and evades detection by creating faked audit logs that don’t trip alerts (see the sketch after this list).
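The final step, the faked audit logs, is the one defenders can engineer against most directly. Below is a minimal sketch of a tamper-evident, hash-chained append-only log in Python; it is one possible countermeasure assumed for illustration, not a technique the document prescribes, and every name and field is hypothetical.

```python
import hashlib
import json

def _digest(prev_hash: str, entry: dict) -> str:
    """Fold the previous record's hash into this record's hash."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

class AuditLog:
    def __init__(self):
        # Each element is (entry, hash-over-entry-and-predecessor).
        self.entries: list[tuple[dict, str]] = []

    def append(self, entry: dict) -> None:
        prev = self.entries[-1][1] if self.entries else "genesis"
        self.entries.append((entry, _digest(prev, entry)))

    def verify(self) -> bool:
        """Recompute the chain; any rewritten or inserted record breaks it."""
        prev = "genesis"
        for entry, stored in self.entries:
            if _digest(prev, entry) != stored:
                return False
            prev = stored
        return True

log = AuditLog()
log.append({"agent": "procurement", "action": "approve_payment", "amount": 950})
log.append({"agent": "procurement", "action": "modify_contract", "id": "C-17"})
assert log.verify()

# Rewriting an earlier entry, as in the scenario above, is now detectable:
log.entries[0] = ({"agent": "procurement", "action": "approve_payment",
                   "amount": 9}, log.entries[0][1])
assert not log.verify()
```

Because each record’s hash depends on its predecessor’s, an attacker who fabricates entries after the fact fails verification at the first tampered record, provided the log is stored out of the agent’s write path.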

The Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC) contributed to the document, working with the USA’s Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA), the Canadian Centre for Cyber Security (Cyber Centre), the New Zealand National Cyber Security Centre (NCSC-NZ) and the United Kingdom National Cyber Security Centre (NCSC-UK).

The document contains more scary stories, then lists 23 different risks and over 100 individual best practices to address them.

Much of the advice targets developers who deploy AI, but the authors also urge vendors to test their wares thoroughly and to ensure their products “fail-safe by default requiring agents to stop and escalate issues to human reviewers in uncertain scenarios.”
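What “fail-safe by default” could look like in practice is a hard stop whenever the agent is unsure. Here is a minimal sketch, assuming a per-action confidence score and a human-review hook; the 0.9 threshold and all names are illustrative assumptions, not drawn from the document.

```python
from typing import Callable

CONFIDENCE_THRESHOLD = 0.9  # illustrative; a real system would tune per risk tier

def run_action(action: str,
               confidence: float,
               execute: Callable[[str], None],
               escalate: Callable[[str], None]) -> None:
    """Execute only high-confidence actions; everything else stops and waits
    for a human reviewer rather than proceeding on a guess."""
    if confidence >= CONFIDENCE_THRESHOLD:
        execute(action)
    else:
        escalate(action)

run_action("rotate expiring TLS certificate", 0.97,
           execute=lambda a: print(f"executed: {a}"),
           escalate=lambda a: print(f"escalated to reviewer: {a}"))
run_action("delete stale user accounts", 0.55,
           execute=lambda a: print(f"executed: {a}"),
           escalate=lambda a: print(f"escalated to reviewer: {a}"))
```

The design choice being illustrated is that uncertainty routes to a person by default, rather than requiring someone to remember to add a review step for each risky action.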

The document also urges security practitioners and researchers to spend more time contemplating AI.

“Threat intelligence for agentic AI systems is still evolving, which can introduce significant security gaps,” the document warns, because resources like the Open Web Application Security Project (OWASP) and MITRE ATLAS currently focus on LLMs. “As a result, some attack vectors unique to agentic AI may not be fully captured or addressed.”

Given the huge to-do list for anyone creating agentic AI, or contemplating its use, the document argues for very cautious adoption.

Prioritize resilience, reversibility and risk containment over efficiency gains

“Organisations should therefore approach adoption with security in mind, recognizing that increased autonomy amplifies the impact of design flaws, misconfigurations and incomplete oversight,” the document concludes. “Deploy agentic AI incrementally, beginning with clearly defined low-risk tasks and continuously assess it against evolving threat models.”

“Strong governance, explicit accountability, rigorous monitoring and human oversight are not optional safeguards but essential prerequisites. Until security practices, evaluation methods and standards mature, organisations should assume that agentic AI systems may behave unexpectedly and plan deployments accordingly, prioritizing resilience, reversibility and risk containment over efficiency gains.” ®
