AI Navigate

Resilience Meets Autonomy: Governing Embodied AI in Critical Infrastructure

arXiv cs.AI / 3/18/2026


Key Points

  • The paper argues that embodied AI used in critical infrastructure can trigger cascading failures when operating in scenarios that exceed its training assumptions, necessitating bounded autonomy within a hybrid governance architecture.
  • It outlines four oversight modes to govern AI capabilities and human judgment across infrastructure sectors.
  • The authors map these oversight modes to sectors based on task complexity, risk level, and consequence severity to tailor governance.
  • The framework draws on the EU AI Act, ISO safety standards, and crisis management research to justify a structured allocation of machine capability and human judgment.
  • The work reframes resilience as a property of governance design, with implications for policymakers, engineers, and operations teams.
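The summary does not name the paper's four oversight modes, but the mapping it describes — from task complexity, risk level, and consequence severity to a governance mode — can be sketched as a simple decision procedure. The mode labels and scoring rule below are hypothetical placeholders for illustration, not the paper's actual taxonomy:

```python
from enum import Enum

class OversightMode(Enum):
    # Hypothetical labels; the summary does not name the paper's four modes.
    AUTONOMOUS = "autonomous operation"       # machine acts, periodic audit
    HUMAN_ON_THE_LOOP = "human-on-the-loop"   # machine acts, human can veto
    HUMAN_IN_THE_LOOP = "human-in-the-loop"   # human approves each action
    HUMAN_ONLY = "human decision only"        # machine advises, human decides

def select_mode(task_complexity: int, risk_level: int,
                consequence_severity: int) -> OversightMode:
    """Map the three governance dimensions (each scored 1..3, low to high)
    to an oversight mode: higher stakes push toward tighter human control."""
    # Stakes are driven by the worse of risk and severity; complexity adds
    # further pressure toward human involvement.
    score = max(risk_level, consequence_severity) + (task_complexity - 1)
    if score <= 1:
        return OversightMode.AUTONOMOUS
    elif score == 2:
        return OversightMode.HUMAN_ON_THE_LOOP
    elif score == 3:
        return OversightMode.HUMAN_IN_THE_LOOP
    return OversightMode.HUMAN_ONLY
```

For example, a low-complexity, low-risk monitoring task would run autonomously, while a high-severity grid-switching decision would land in the human-only mode; the point of the framework is that this allocation is an explicit design choice rather than an emergent property of the AI system.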

Abstract

Critical infrastructure increasingly incorporates embodied AI for monitoring, predictive maintenance, and decision support. However, AI systems designed to handle statistically representable uncertainty struggle with cascading failures and crisis dynamics that exceed their training assumptions. This paper argues that embodied AI resilience depends on bounded autonomy within a hybrid governance architecture. We outline four oversight modes and map them to critical infrastructure sectors based on task complexity, risk level, and consequence severity. Drawing on the EU AI Act, ISO safety standards, and crisis management research, we argue that effective governance requires a structured allocation of machine capability and human judgment.