Adversarial Vulnerabilities in Neural Operator Digital Twins: Gradient-Free Attacks on Nuclear Thermal-Hydraulic Surrogates

arXiv cs.LG / 3/25/2026

Key Points

  • The study finds that neural-operator-based digital twins for nuclear thermal-hydraulics can be driven to catastrophic prediction errors using extremely sparse, physically plausible perturbations to boundary conditions.
  • Gradient-free adversarial search (differential evolution) across four neural operator architectures increases relative L2 error from roughly 1.5% to about 37–63% while remaining undetected by standard validation and z-score anomaly detection (see the sketch after this list).
  • The paper shows that vulnerability is not simply “more sensitivity equals worse”; it proposes an effective perturbation dimension (d_eff) and a two-factor vulnerability model combining sensitivity concentration and amplification.
  • Architectures with extreme sensitivity concentration (e.g., POD-DeepONet, d_eff≈1) are not necessarily the most exploitable, while moderate concentration with enough amplification (e.g., S-DeepONet, d_eff≈4) yields the highest attack success.
  • The authors argue these results reveal a structural attack surface in operator learning and imply deployment in safety-critical settings requires robustness guarantees beyond conventional validation.
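
A minimal sketch of the attack loop summarized above, under stated assumptions: the `surrogate` callable, the nominal boundary-condition vector, and the use of SciPy's `differential_evolution` are hypothetical stand-ins, since the paper's exact implementation is not reproduced here. The z-score check mirrors the standard anomaly detector that the reported attacks evade.

```python
# Hedged sketch (not the authors' code): single-point gradient-free attack on a
# black-box surrogate via differential evolution, plus the z-score filter that
# successful attacks are reported to pass. `surrogate`, `bc_nominal`, `truth`,
# and all bounds are hypothetical placeholders.
import numpy as np
from scipy.optimize import differential_evolution

def relative_l2_error(pred, truth):
    """Relative L2 error between predicted and reference fields."""
    return np.linalg.norm(pred - truth) / np.linalg.norm(truth)

def attack_single_point(surrogate, bc_nominal, truth, idx, delta):
    """Maximize prediction error by perturbing one boundary-condition entry.

    idx   -- index of the single input component to perturb (<1% of inputs)
    delta -- physically plausible perturbation bound
    """
    def neg_error(x):
        bc = bc_nominal.copy()
        bc[idx] += x[0]
        return -relative_l2_error(surrogate(bc), truth)  # DE minimizes

    result = differential_evolution(neg_error, bounds=[(-delta, delta)],
                                    maxiter=50, popsize=20, seed=0)
    return result.x[0], -result.fun  # best perturbation, achieved error

def passes_zscore_check(bc_perturbed, mean, std, threshold=3.0):
    """Standard z-score anomaly detector of the kind the attacks evade."""
    z = np.abs((bc_perturbed - mean) / std)
    return np.all(z < threshold)
```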

Abstract

Operator learning models are rapidly emerging as the predictive core of digital twins for nuclear and energy systems, promising real-time field reconstruction from sparse sensor measurements. Yet their robustness to adversarial perturbations remains uncharacterized, a critical gap for deployment in safety-critical systems. Here we show that neural operators are acutely vulnerable to extremely sparse (fewer than 1% of inputs), physically plausible perturbations that exploit their sensitivity to boundary conditions. Using gradient-free differential evolution across four operator architectures, we demonstrate that minimal modifications trigger catastrophic prediction failures, increasing relative L2 error from ~1.5% (validated accuracy) to 37–63% while remaining completely undetectable by standard validation metrics. Notably, 100% of successful single-point attacks pass z-score anomaly detection. We introduce the effective perturbation dimension d_eff, a Jacobian-based diagnostic that, together with sensitivity magnitude, yields a two-factor vulnerability model explaining why architectures with extreme sensitivity concentration (POD-DeepONet, d_eff ≈ 1) are not necessarily the most exploitable, since low-rank output projections cap maximum error, while moderate concentration with sufficient amplification (S-DeepONet, d_eff ≈ 4) produces the highest attack success. Gradient-free search outperforms gradient-based alternatives (PGD) on architectures with gradient pathologies, while random perturbations of equal magnitude achieve near-zero success rates, confirming that the discovered vulnerabilities are structural. Our findings expose a previously overlooked attack surface in operator learning models and establish that these models require robustness guarantees beyond standard validation before deployment.
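
The abstract describes d_eff only as a Jacobian-based diagnostic; one plausible concrete reading is a participation ratio over per-input sensitivities, sketched below. This formula is an assumption consistent with the stated behavior (d_eff ≈ 1 under extreme concentration), not necessarily the paper's exact definition.

```python
# Hedged sketch of an effective-perturbation-dimension diagnostic. The
# participation-ratio form below is an assumed instantiation, not confirmed
# to be the authors' formula.
import numpy as np

def effective_perturbation_dimension(jacobian):
    """Participation ratio over per-input sensitivities s_i = ||J[:, i]||_2.

    d_eff = (sum_i s_i^2)^2 / sum_i s_i^4
    d_eff ≈ 1 when sensitivity concentrates on one input;
    d_eff ≈ n when sensitivity is spread evenly across all n inputs.
    """
    s = np.linalg.norm(jacobian, axis=0)   # sensitivity of each input column
    p = s**2 / np.sum(s**2)                # normalized sensitivity mass
    return 1.0 / np.sum(p**2)              # inverse participation ratio
```

Under this reading, POD-DeepONet's d_eff ≈ 1 would mean essentially one input direction carries the sensitivity, while S-DeepONet's d_eff ≈ 4 spreads it over a handful of directions, enough amplification across enough dimensions to yield the highest attack success.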