Focus Session: Autonomous Systems Dependability in the era of AI: Design Challenges in Safety, Security, Reliability and Certification
arXiv cs.AI / 5/1/2026
Key Points
- The paper examines why dependability for embedded, safety-critical autonomous systems is getting harder due to rising complexity, mixed hardware/software stacks, and AI/ML-driven components.
- It argues that traditional safety, security, and reliability assurance methods often struggle to handle AI/ML’s dynamic, uncertain, and hard-to-formalize behavior, particularly under strict real-time, power, and safety constraints.
- The authors emphasize a holistic assurance strategy covering multiple abstraction layers and both design-time and run-time assurance, rather than relying on single-point verification.
- It surveys emerging methodologies, architectures, and frameworks, including advances in reliability modeling, secure system design, and certification approaches that can work with learning-enabled components that lack perfect guarantees.
- Overall, the work aims to bridge AI innovation and certifiable system-level dependability by addressing the verification, validation, and certification gaps that AI/ML uncertainty creates.
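The combination of design-time and run-time assurance mentioned above is often realized with a run-time assurance (RTA) monitor in the Simplex style: an unverified learning-enabled controller is wrapped by a checker that falls back to a simple, verified controller whenever the ML output leaves a certified safety envelope. The sketch below is purely illustrative; the controller names and the speed-limit envelope are hypothetical assumptions, not details from the paper.

```python
# Illustrative Simplex-style run-time assurance monitor.
# The ML controller's command is accepted only if it stays inside a
# certified safety envelope; otherwise a verified fallback takes over.
# SPEED_LIMIT and both controllers are hypothetical stand-ins.

SPEED_LIMIT = 2.0  # hypothetical certified envelope (m/s)

def ml_controller(state):
    # Stand-in for a learning-enabled component with no formal guarantees.
    return state["target_speed"]

def fallback_controller(state):
    # Stand-in for a simple, formally verified controller.
    return min(state["target_speed"], SPEED_LIMIT)

def rta_step(state):
    """Accept the ML command only when it lies inside the envelope."""
    cmd = ml_controller(state)
    if abs(cmd) <= SPEED_LIMIT:
        return cmd, "ml"
    return fallback_controller(state), "fallback"

# Example: an out-of-envelope request is overridden by the fallback.
cmd, source = rta_step({"target_speed": 5.0})
```

The design point this illustrates is that the safety case rests only on the checker and the fallback, which are small enough to verify at design time, while the ML component is free to improve performance without perfect guarantees.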