Building Better Environments for Autonomous Cyber Defence

arXiv cs.AI / 4/13/2026


Key Points

  • The paper compiles expert knowledge from a November 2025 workshop on what constitutes a strong reinforcement learning (RL) environment for autonomous cyber defence (ACD).
  • It addresses gaps in the existing RL-for-ACD literature by focusing on practical tradecraft, domain knowledge, and recurring hazards when building RL training/evaluation setups for network defence.
  • The authors propose a framework for decomposing the interface between RL cyber environments and real-world systems, aiming to improve realism and integration.
  • It also provides guidelines and best practices for developing RL-based ACD environments and evaluating RL agents, with attention to government and critical infrastructure network scenarios.

Abstract

In November 2025, the authors ran a workshop on the topic of what makes a good reinforcement learning (RL) environment for autonomous cyber defence (ACD). This paper details the knowledge shared by participants during the workshop and in their written contributions shortly afterwards. The workshop participants come from academia, industry, and government, and have extensive hands-on experience designing and working with RL and cyber environments. While there is now a sizeable body of literature describing work in RL for ACD, there is nevertheless a great deal of tradecraft, domain knowledge, and common hazards that are not comprehensively detailed in any single resource. With a specific focus on building better environments to train and evaluate autonomous RL agents in network defence scenarios, including government and critical infrastructure networks, the contributions of this work are twofold: (1) a framework for decomposing the interface between RL cyber environments and real systems, and (2) guidelines on current best practice for RL-based ACD environment development and agent evaluation, based on the key findings from our workshop.