Threat-Oriented Digital Twinning for Security Evaluation of Autonomous Platforms

arXiv cs.RO / 4/29/2026


Key Points

  • The paper introduces a threat-oriented digital twinning methodology to evaluate the cybersecurity of learning-enabled autonomous platforms under realistic adversarial conditions.
  • It provides a modular open-source twin architecture with separated sensing, autonomy, and supervisory-control functions, including confidence-gated multi-modal perception and explicit command/telemetry trust boundaries.
  • The design supports runtime “hold-safe” behavior and translates threat analysis into reproducible, observable, and controllable tests for spoofing, replay, malformed-input injection, degraded sensing, and adversarial ML stress.
  • Although the implemented prototype is ground-based, the architecture is intentionally aligned with stack elements shared with UAV and space systems, such as constrained onboard compute and intermittent or high-latency links.
  • Overall, the work is positioned as a reusable research scaffold for dependable and secure autonomy studies across UAV and space domains.
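The "confidence-gated multi-modal perception" and "hold-safe" behaviors in the bullets above can be illustrated with a minimal sketch. This is not the paper's implementation; the names (`SensorEstimate`, `fuse_gated`, `supervisory_step`), the gate threshold, and the stop-on-distrust policy are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class SensorEstimate:
    """One modality's state estimate with a self-reported confidence in [0, 1]."""
    source: str
    value: float       # e.g. estimated range to an obstacle, in meters
    confidence: float

def fuse_gated(estimates, gate=0.6):
    """Confidence-gated fusion: drop low-confidence modalities, then take a
    confidence-weighted average of the rest. Returns None when no modality
    passes the gate, signaling that perception cannot currently be trusted."""
    passed = [e for e in estimates if e.confidence >= gate]
    if not passed:
        return None
    total = sum(e.confidence for e in passed)
    return sum(e.value * e.confidence for e in passed) / total

def supervisory_step(estimates):
    """Hold-safe supervisor: command a conservative stop unless gated
    perception yields a usable estimate."""
    fused = fuse_gated(estimates)
    if fused is None:
        return {"command": "HOLD_SAFE"}
    return {"command": "PROCEED", "range_m": fused}
```

Under a degraded-sensing or adversarial-ML stress test of the kind the paper describes, driving every modality's confidence below the gate would make the supervisor fall back to the hold-safe command rather than act on untrusted perception.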

Abstract

Open, unclassified research on secure autonomy is constrained by limited access to operational platforms, contested communications infrastructure, and representative adversarial test conditions. This paper presents a threat-oriented digital twinning methodology for cybersecurity evaluation of learning-enabled autonomous platforms. The approach is instantiated as an open-source, modular twin of a representative autonomy stack with separated sensing, autonomy, and supervisory-control functions; confidence-gated multi-modal perception; explicit command and telemetry trust boundaries; and runtime hold-safe behavior. The contribution is methodological: a reproducible design pattern that translates threat analysis into observable, controllable tests for spoofing, replay, malformed-input injection, degraded sensing, and adversarial ML stress. Although the implemented proxy is ground-based, the architecture is intentionally framed around stack elements shared with UAV and space systems, including constrained onboard compute, intermittent or high-latency links, probabilistic perception, and mission-critical recovery behavior. The result is an implementable research scaffold for dependable and secure autonomy studies across UAV and space domains.
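The "explicit command and telemetry trust boundaries" and the spoofing/replay/malformed-input tests mentioned in the abstract can be made concrete with a small sketch of a command-link gate. This is an assumption-laden illustration, not the paper's design: the framing (big-endian sequence number plus HMAC-SHA256 tag), the class name `CommandGate`, and the hard-coded demo key are all hypothetical.

```python
import hmac
import hashlib
import struct

SHARED_KEY = b"twin-demo-key"  # placeholder only; real systems provision keys securely

def frame_command(seq, payload, key=SHARED_KEY):
    """Serialize a command as: 8-byte monotonic sequence | payload | 32-byte HMAC tag."""
    body = struct.pack(">Q", seq) + payload
    return body + hmac.new(key, body, hashlib.sha256).digest()

class CommandGate:
    """Trust boundary on the command link: rejects malformed frames (too short),
    spoofed frames (bad MAC), and replayed frames (non-increasing sequence)."""

    def __init__(self, key=SHARED_KEY):
        self.key = key
        self.last_seq = -1

    def accept(self, frame):
        if len(frame) < 8 + 32:
            return None  # malformed: cannot hold header + tag
        body, tag = frame[:-32], frame[-32:]
        expected = hmac.new(self.key, body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return None  # spoofed or corrupted in transit
        seq = struct.unpack(">Q", body[:8])[0]
        if seq <= self.last_seq:
            return None  # replayed
        self.last_seq = seq
        return body[8:]  # verified, fresh payload
```

A twin built this way makes the threat tests observable and controllable: resending a captured frame exercises the replay check, flipping a byte exercises the spoofing check, and truncating the frame exercises malformed-input handling, each with a deterministic rejection at the boundary.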