Connected Dependability Cage: Run-Time Function and Anomaly Monitoring for the Development and Operation of Safe Automated Vehicles

arXiv cs.RO / 5/1/2026


Key Points

  • The paper addresses safety challenges for AI-enabled automated vehicles operating in unpredictable environments and discusses the need to go beyond conventional functional safety toward fail-operational behavior.
  • It proposes the “Connected Dependability Cage,” a framework for hierarchical fail-operational operation in AI perception systems.
  • The framework combines a Function Monitor that checks multiple heterogeneous perception pipelines for inconsistencies via voting, and an Anomaly Monitor that flags reliability issues by detecting unknown/novel objects.
  • When critical discrepancies are found, the system degrades gracefully and can ultimately transition to a minimal-risk maneuver strategy.
  • Whenever either monitor raises a safety flag, the vehicle automatically records data to support iterative development and continuous improvement.
  • Both monitors are validated through extensive real-world vehicle testing.
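The interplay of the two monitors can be illustrated with a minimal sketch. This is not the paper's implementation: the quorum rule, the confidence threshold, and all names (`function_monitor`, `anomaly_monitor`, `dependability_cage_step`) are hypothetical simplifications of the described voting, novelty detection, and degradation logic.

```python
from collections import Counter
from enum import Enum

class SafetyState(Enum):
    NOMINAL = "nominal"
    DEGRADED = "degraded"                 # graceful degradation
    MINIMAL_RISK = "minimal_risk_maneuver"

def function_monitor(pipeline_outputs, quorum=2):
    """Vote across heterogeneous perception pipeline outputs.

    Returns (majority_label, consistent): consistent is True when at
    least `quorum` pipelines agree on the same label.
    """
    label, count = Counter(pipeline_outputs).most_common(1)[0]
    return label, count >= quorum

def anomaly_monitor(detection_scores, known_threshold=0.5):
    """Flag a scene as anomalous if any detection's confidence against
    all known classes falls below the threshold, i.e. it may be an
    unknown/novel object outside the training distribution."""
    return any(score < known_threshold for score in detection_scores)

def dependability_cage_step(pipeline_outputs, detection_scores):
    """One monitoring cycle: combine both monitors into a safety state
    and a data-recording trigger."""
    _, consistent = function_monitor(pipeline_outputs)
    anomalous = anomaly_monitor(detection_scores)
    record = (not consistent) or anomalous  # either flag triggers recording
    if not consistent and anomalous:
        state = SafetyState.MINIMAL_RISK
    elif not consistent or anomalous:
        state = SafetyState.DEGRADED
    else:
        state = SafetyState.NOMINAL
    return state, record
```

For example, three pipelines agreeing on "car" with high-confidence detections keep the vehicle in the nominal state, while pipeline disagreement combined with a low-confidence (potentially novel) object escalates to the minimal-risk maneuver and triggers recording.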

Abstract

The advancement of automated vehicles introduces complex safety challenges, particularly in dynamic and unpredictable environments where AI-enabled perception systems must operate reliably. Ensuring compliance with safety standards such as ISO 26262 and ISO/PAS 21448 (SOTIF) is essential for addressing system malfunctions and mitigating unsafe behavior in unknown scenarios. However, as automation levels increase, vehicles must go beyond conventional functional safety by incorporating fail-operational capabilities that enable continued safe operation during system or component failures and the handling of unfamiliar or degraded operational conditions. To address these safety concerns, we propose the Connected Dependability Cage, an architectural framework designed to enable hierarchical fail-operational behavior in AI-enabled perception systems. This framework integrates two complementary monitoring mechanisms: a Function Monitor that oversees multiple heterogeneous AI-based perception pipelines and detects inconsistencies through a voting mechanism, and an Anomaly Monitor that evaluates the reliability of AI perception by detecting unknown or novel objects in scenes that may be excluded from the training dataset. In the presence of critical discrepancies, the system supports graceful degradation, ultimately enabling a transition to a minimal-risk maneuver strategy. Furthermore, whenever either monitor raises a safety flag, an automated data recording process is initiated to facilitate iterative system development and continuous improvement. Both monitors have been implemented and validated through extensive vehicle testing, demonstrating their practical effectiveness in real-world applications.