Environment-Grounded Multi-Agent Workflow for Autonomous Penetration Testing

arXiv cs.RO / 3/26/2026


Key Points

  • The paper proposes an environment-grounded multi-agent workflow that uses large language models to automate penetration testing in robotics-based cyber-physical systems.
  • It dynamically builds a shared graph memory during execution to capture observable system state such as network topology, communication channels, vulnerabilities, and attempted exploits.
  • The architecture is designed to keep structured automation while preserving traceability and effective context management for human oversight.
  • In a ROS/ROS2 robotics Capture-the-Flag setting, the system completed the challenge in 100% of test runs (n=5), outperforming prior literature benchmarks.
  • The authors position the approach as aligning with oversight and governance expectations referenced by frameworks like the EU AI Act.

Abstract

The increasing complexity and interconnectivity of digital infrastructures make scalable and reliable security assessment methods essential. Robotic systems represent a particularly important class of operational technology, as modern robots are highly networked cyber-physical systems deployed in domains such as industrial automation, logistics, and autonomous services. This paper explores the use of large language models for automated penetration testing in robotic environments. We propose an environment-grounded multi-agent architecture tailored to robotics-based systems. The approach dynamically constructs a shared graph-based memory during execution that captures the observable system state, including network topology, communication channels, vulnerabilities, and attempted exploits. This enables structured automation while maintaining traceability and effective context management throughout the testing process. Evaluated across multiple iterations within a specialized robotics Capture-the-Flag scenario (ROS/ROS2), the system demonstrated high reliability, successfully completing the challenge in 100% of test runs (n=5). This performance significantly exceeds literature benchmarks while maintaining the traceability and human oversight required by frameworks like the EU AI Act.