Adversarial Robustness of Time-Series Classification for Crystal Collimator Alignment

arXiv cs.LG / 4/9/2026


Key Points

  • The paper studies how to improve adversarial robustness for a CNN used at CERN’s LHC to classify beam-loss monitor (BLM) time-series data during crystal rotation to support crystal collimator alignment.
  • It formalizes a local robustness property under an adversarial threat model grounded in real-world plausibility, and adapts established transformation/semantic perturbation robustness patterns to the deployed time-series pipeline.
  • To match the deployed preprocessing, the authors implement a preprocessing-aware differentiable wrapper that captures normalization, padding constraints, and structured perturbations so existing gradient-based robustness tools can be applied end-to-end.
  • Because data-dependent preprocessing (e.g., per-window z-normalization) introduces nonlinearities that complicate formal verification, the work emphasizes attack-based robustness estimates validated with Foolbox and ART rather than full formal proofs.
  • Adversarial fine-tuning improves robust accuracy by up to 18.6% without hurting clean accuracy.
  • The paper extends window-level robustness to sequence-level robustness for sliding-window classification, using adversarial sequences as counterexamples to temporal robustness assumptions.
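
The per-window z-normalization mentioned above is what makes the preprocessing data-dependent: the normalization constants are computed from the window itself, so a perturbation applied before preprocessing is reshaped by the window's own statistics. A minimal NumPy sketch of such a wrapper (function names and the padding scheme are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def z_normalize(window, eps=1e-8):
    """Per-window z-normalization: mean and std are computed from the
    input itself, which is what makes this step data-dependent and
    hard for formal verifiers to abstract."""
    mu = window.mean()
    sigma = window.std()
    return (window - mu) / (sigma + eps)

def preprocess(window, target_len):
    """Hypothetical wrapper stage: z-normalize, then right-pad with
    zeros so a fixed-input-size CNN can consume variable-length windows."""
    normed = z_normalize(window)
    pad = target_len - len(normed)
    return np.pad(normed, (0, pad)) if pad > 0 else normed[:target_len]

x = np.array([1.0, 2.0, 3.0, 4.0])
delta = np.array([0.1, 0.0, 0.0, 0.0])
# The effect of delta after preprocessing is not simply delta rescaled:
# perturbing one sample shifts mu and sigma for the whole window.
print(preprocess(x + delta, 6) - preprocess(x, 6))
```

In the deployed setting this wrapper would sit in front of the CNN and be written with differentiable operations (e.g., in PyTorch or TensorFlow) so gradient-based attack frameworks like Foolbox and ART can backpropagate through normalization and padding end to end.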

Abstract

In this paper, we analyze and improve the adversarial robustness of a convolutional neural network (CNN) that assists crystal-collimator alignment at CERN's Large Hadron Collider (LHC) by classifying a beam-loss monitor (BLM) time series during crystal rotation. We formalize a local robustness property for this classifier under an adversarial threat model based on real-world plausibility. Building on established parameterized input-transformation patterns used for transformation- and semantic-perturbation robustness, we instantiate a preprocessing-aware wrapper for our deployed time-series pipeline: we encode time-series normalization, padding constraints, and structured perturbations as a lightweight differentiable wrapper in front of the CNN, so that existing gradient-based robustness frameworks can operate on the deployed pipeline. For formal verification, data-dependent preprocessing such as per-window z-normalization introduces nonlinear operators that require verifier-specific abstractions. We therefore focus on attack-based robustness estimates and pipeline-checked validity by benchmarking robustness with the frameworks Foolbox and ART. Adversarial fine-tuning of the resulting CNN improves robust accuracy by up to 18.6% without degrading clean accuracy. Finally, we extend robustness on time-series data beyond single windows to sequence-level robustness for sliding-window classification, introduce adversarial sequences as counterexamples to a temporal robustness requirement over full scans, and observe attack-induced misclassifications that persist across adjacent windows.
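
The sequence-level notion at the end of the abstract can be illustrated with a simple check over sliding-window predictions: a temporal robustness requirement is violated when attack-induced misclassifications persist across adjacent windows. The function below is a hedged sketch of such a check; the name, the `min_run` persistence threshold, and the run-counting criterion are illustrative assumptions, not the paper's formal definition.

```python
def persistent_flips(clean_preds, adv_preds, min_run=2):
    """Find runs of at least `min_run` adjacent windows where the
    attacked prediction differs from the clean one. A nonempty result
    is a counterexample to a temporal robustness requirement that any
    attack-induced flip must not persist across neighboring windows."""
    flips = [c != a for c, a in zip(clean_preds, adv_preds)]
    runs, run = [], 0
    for flipped in flips:
        if flipped:
            run += 1
        else:
            if run >= min_run:
                runs.append(run)
            run = 0
    if run >= min_run:  # flush a run that reaches the end of the scan
        runs.append(run)
    return runs

# Hypothetical per-window labels over one scan (0 = no channeling, 1 = channeling):
clean = [0, 0, 1, 1, 1, 0, 0]
adv   = [0, 1, 0, 0, 1, 0, 1]
print(persistent_flips(clean, adv))  # → [3]: flips persist over 3 adjacent windows
```

An adversarial sequence in this sense perturbs the full scan so that flipped windows cluster, which is exactly the failure mode the paper reports observing across adjacent windows.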