Adversarial Robustness of Deep State Space Models for Forecasting

arXiv cs.LG / 4/7/2026


Key Points

  • The paper studies adversarial robustness for time-series forecasting through a control-theoretic framework, applied to the Spacetime state-space model (SSM) forecaster, whose robustness was previously poorly understood.
  • It proves representational properties of the decoder-only Spacetime architecture, showing it can express the optimal Kalman predictor when the data-generating process is autoregressive, a property the authors argue no other SSM possesses (a minimal Kalman-predictor sketch follows this list).
  • The authors model robust forecasting as a Stackelberg (defender–attacker) game against worst-case stealthy adversaries under a detection budget, and propose adversarial training to solve it (a generic training loop is sketched after this list).
  • They derive closed-form bounds linking adversarial forecasting vulnerability to open-loop instability, closed-loop instability, and the decoder state dimension, providing design principles for robust forecasters.
  • Experiments on the Monash benchmark datasets show that attacks requiring no forecaster access and no gradient computation can cause at least 33% more forecasting error than projected gradient descent with a small step size.
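
A minimal sketch of the representational claim in the second key point, assuming the textbook companion-form realization of an AR(p) process (this is standard state-space material offered as context, not the paper's proof): with noiseless observations and process noise entering only the first state component, the steady-state Kalman one-step predictor reduces to the AR recursion itself, which is the kind of predictor the decoder-only architecture would need to express.

```python
# Companion-form state-space view of an AR(p) process; illustrative, not the
# paper's construction. The optimal one-step predictor here is the AR recursion.
import numpy as np

rng = np.random.default_rng(0)
a = np.array([0.6, -0.3, 0.1])   # AR(3) coefficients (illustrative values)
p = len(a)

# State x_t = [y_t, y_{t-1}, ..., y_{t-p+1}]; dynamics x_{t+1} = A x_t + noise
A = np.zeros((p, p))
A[0, :] = a                      # first row carries the AR coefficients
A[1:, :-1] = np.eye(p - 1)       # shift register for the lagged values

# Simulate the AR process
T = 200
y = np.zeros(T)
for t in range(p, T):
    y[t] = a @ y[t - p:t][::-1] + 0.1 * rng.standard_normal()

# Optimal one-step prediction: y_hat[t] = a @ (most recent p values, newest first)
y_hat = np.array([a @ y[t - p:t][::-1] for t in range(p, T)])
print("mean abs one-step error:", np.mean(np.abs(y[p:] - y_hat)))
```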
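
The Stackelberg formulation in the third key point is solved by adversarial training. The sketch below is a generic PyTorch defender–attacker loop, assuming, purely as an illustration and not as the paper's stealth model, that the detection budget is an l-infinity ball of radius `eps` around the input window; `model` is any differentiable forecaster mapping a context window to a forecast.

```python
# Generic adversarial training sketch (inner max via PGD, outer min via SGD).
# The l-inf budget stands in for the paper's detection constraint (assumption).
import torch

def pgd_attack(model, x, y, eps=0.05, alpha=0.01, steps=10):
    """Inner maximization: a perturbation delta with ||delta||_inf <= eps."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = torch.nn.functional.mse_loss(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the forecasting loss
            delta.clamp_(-eps, eps)              # project back onto the budget
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_step(model, opt, x, y):
    """Outer minimization: the defender fits the worst-case perturbed inputs."""
    delta = pgd_attack(model, x, y)
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x + delta), y)
    loss.backward()
    opt.step()
    return loss.item()
```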

Abstract

State-space models (SSMs) for time-series forecasting have demonstrated strong empirical performance on benchmark datasets, yet their robustness under adversarial perturbations is poorly understood. We address this gap through a control-theoretic lens, focusing on the recently proposed Spacetime SSM forecaster. We first establish that the decoder-only Spacetime architecture can represent the optimal Kalman predictor when the underlying data-generating process is autoregressive, a property no other SSM possesses. Building on this, we formulate robust forecaster design as a Stackelberg game against worst-case stealthy adversaries constrained by a detection budget, and solve it via adversarial training. We derive closed-form bounds on adversarial forecasting error that expose how open-loop instability, closed-loop instability, and decoder state dimension each amplify vulnerability, offering actionable principles toward robust forecaster design. Finally, we show that even adversaries with no access to the forecaster can nonetheless construct effective attacks by exploiting the model's locally linear input-output behavior, bypassing gradient computations entirely. Experiments on the Monash benchmark datasets highlight that model-free attacks, without any gradient computation, can cause at least 33% more error than projected gradient descent with a small step size.
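
The bounds themselves are not reproduced here, but the mechanism they formalize is easy to illustrate: a state perturbation propagated through linear dynamics x_{t+1} = A x_t persists longer, and therefore corrupts more forecast steps, as the spectral radius of A approaches one. A toy demo, purely illustrative:

```python
# Illustrative only; these are not the paper's bounds. A perturbation pushed
# through x_{t+1} = A x_t decays slowly (or not at all) as the spectral radius
# of A approaches one, so near-unstable dynamics amplify an attack's impact.
import numpy as np

delta0 = np.ones(4)                         # initial state perturbation
for rho in (0.5, 0.9, 1.0):                 # spectral radius of the toy dynamics
    A = rho * np.eye(4)
    total = sum(np.linalg.norm(np.linalg.matrix_power(A, k) @ delta0)
                for k in range(50))         # accumulated error over 50 steps
    print(f"spectral radius {rho}: accumulated perturbation {total:.1f}")
```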
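
A hedged sketch in the spirit of the model-free attack described above: the attacker never queries the forecaster and computes no gradients, instead fitting a linear surrogate of the series' local input-output behavior from historical data and perturbing along its most sensitive direction. The least-squares surrogate and the function names here are illustrative assumptions, not the paper's algorithm.

```python
# Model-free attack sketch: no forecaster access, no gradients. A linear
# surrogate is fit to historical (window, next-value) pairs; the perturbation
# is sign-aligned with it. Illustrative assumptions, not the paper's method.
import numpy as np

def fit_linear_surrogate(series, window):
    """Least-squares map from a length-`window` context to the next value."""
    X = np.stack([series[t - window:t] for t in range(window, len(series))])
    y = series[window:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def model_free_perturbation(series, window, eps):
    """l-inf perturbation of the latest window, sign-aligned with the surrogate."""
    w = fit_linear_surrogate(series, window)
    return eps * np.sign(w)   # maximally shifts any locally linear predictor
```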
