Focus Session: Autonomous Systems Dependability in the era of AI: Design Challenges in Safety, Security, Reliability and Certification

arXiv cs.AI / 5/1/2026


Key Points

  • The paper examines why dependability for embedded, safety-critical autonomous systems is getting harder due to rising complexity, mixed hardware/software stacks, and AI/ML-driven components.
  • It argues that traditional safety, security, and reliability assurance methods often struggle to handle AI/ML’s dynamic, uncertain, and hard-to-formalize behavior, particularly under strict real-time, power, and safety constraints.
  • The authors emphasize a holistic assurance strategy covering multiple abstraction layers and both design-time and run-time assurance, rather than relying on single-point verification (a minimal run-time assurance sketch follows this list).
  • It surveys emerging methodologies, architectures, and frameworks, including advances in reliability modeling, secure system design, and certification approaches that can work with learning-enabled components that lack perfect guarantees.
  • Overall, the work aims to bridge AI innovation with system-level dependability that can be certified, by addressing verification, validation, and certification gaps caused by AI/ML uncertainty.
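The design-time/run-time split the key points describe is often realized as a Simplex-style run-time assurance architecture: an unverified learning-enabled controller is paired with a simple, certifiable fallback and a monitor that checks each output against a verified safe envelope. The sketch below illustrates that general pattern only; the `Command` fields, envelope bounds, and function names are illustrative assumptions, not details from the paper.

```python
from dataclasses import dataclass

@dataclass
class Command:
    steering: float  # steering angle in radians
    throttle: float  # normalized throttle in [0, 1]

# Hypothetical safe envelope; in practice these bounds would come from
# a design-time, formally analyzed model of the vehicle dynamics.
MAX_STEERING = 0.35  # rad
MAX_THROTTLE = 0.80

def within_envelope(cmd: Command) -> bool:
    """Check that a command stays inside the verified safe envelope."""
    return abs(cmd.steering) <= MAX_STEERING and 0.0 <= cmd.throttle <= MAX_THROTTLE

def assured_step(ml_cmd: Command, baseline_cmd: Command) -> Command:
    """Run-time assurance: pass through the learning-enabled controller's
    output only if it satisfies the envelope; otherwise switch to the
    simple, certifiable baseline controller's output."""
    return ml_cmd if within_envelope(ml_cmd) else baseline_cmd

# Example: an out-of-envelope ML command is overridden by the baseline.
risky = Command(steering=0.6, throttle=0.9)
safe = Command(steering=0.1, throttle=0.3)
assert assured_step(risky, safe) is safe
```

The design value of this pattern is that only the envelope check and the baseline controller need full certification; the ML controller can be retrained or replaced without reopening the whole safety case.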

Abstract

The design of embedded safety-critical systems, such as those used in next-generation automotive and autonomous platforms, is increasingly challenged by escalating system complexity, hardware-software heterogeneity, and the integration of intelligent, data-driven components. Ensuring dependability in such systems requires a holistic approach that spans multiple abstraction layers and encompasses both design- and run-time assurance. Traditional methods for reliability, safety, and security management often fall short in addressing the dynamic and uncertain behaviors introduced by Artificial Intelligence (AI) and Machine Learning (ML) components, especially under stringent real-time, power, and safety constraints. While AI and ML offer powerful predictive, adaptive, and self-optimizing capabilities that can enhance system dependability, their inherent non-determinism, data-dependence, and lack of formal guarantees introduce new challenges for verification, validation, and certification. This paper explores emerging methodologies, architectures, and frameworks for designing dependable autonomous and embedded systems in the era of AI. It highlights advances in reliability modeling, secure system design, and certification approaches that account for imperfect, learning-enabled components, aiming to bridge the gap between AI innovation and certifiable system-level dependability.
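To make "reliability modeling that accounts for imperfect, learning-enabled components" concrete, here is a toy back-of-the-envelope model: the residual per-demand failure probability of an ML component guarded by a run-time monitor, assuming, purely for illustration, that component failure and monitor detection are independent events. The formula, parameter names, and numbers are our assumptions, not the paper's model.

```python
def residual_failure_prob(p_ml_fail: float, monitor_coverage: float) -> float:
    """Per-demand probability that the ML component fails AND the monitor
    misses the failure, assuming the two events are independent
    (an illustrative simplification, not the paper's model)."""
    return p_ml_fail * (1.0 - monitor_coverage)

# Example: a perception component wrong on ~1 in 10,000 frames, paired
# with a monitor that detects 99% of those errors, leaves a residual
# undetected-failure probability of ~1e-6 per frame.
print(residual_failure_prob(1e-4, 0.99))  # ~1e-06
```

Even this crude arithmetic shows why certification arguments for learning-enabled components tend to lean on monitor coverage rather than on ML accuracy alone: halving the monitor's miss rate cuts the residual failure probability as much as halving the component's error rate does.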