Adversarial Robustness of Deep State Space Models for Forecasting
arXiv cs.LG / 4/7/2026
Key Points
- The paper studies the adversarial robustness of time-series forecasting through a control-theoretic framework, applied to the Spacetime state-space model (SSM) forecaster, whose robustness had not previously been characterized.
- It proves representational properties of the decoder-only Spacetime architecture, showing that it can express the optimal Kalman predictor under an autoregressive data-generating process, a property the authors argue other SSMs lack (a minimal Kalman-predictor sketch follows this list).
- The authors model robust forecasting as a Stackelberg (defender–attacker) game against worst-case stealthy adversaries operating under a detection budget, and propose adversarial training to solve it (see the training sketch after this list).
- They derive closed-form bounds linking adversarial forecasting vulnerability to factors such as open-loop/closed-loop instability and the decoder state dimension, yielding design principles for robustness (a toy example of instability-driven amplification appears below).
- Experiments on Monash benchmark datasets show that attacks requiring no access to the forecaster and no gradient computation increase forecasting error by at least 33% relative to projected gradient descent (PGD) with small step sizes; the training sketch below uses a PGD inner loop.
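
To make the Kalman-predictor claim concrete, here is a minimal sketch of the optimal one-step-ahead Kalman predictor for an assumed AR(2) data-generating process in state-space form. The coefficients, noise covariances, and simulation are illustrative and not taken from the paper; the point is only that the optimal forecaster for such a process is a simple linear recursion that an SSM decoder could in principle represent.

```python
import numpy as np

# Assumed AR(2) data-generating process in companion state-space form:
#   x_{t+1} = A x_t + w_t,   y_t = C x_t + v_t.
# The Kalman predictor is the optimal one-step-ahead forecaster for this
# process; the paper claims decoder-only Spacetime can express it exactly.
a1, a2 = 0.6, 0.3                          # illustrative AR coefficients
A = np.array([[a1, a2], [1.0, 0.0]])       # companion-form transition matrix
C = np.array([[1.0, 0.0]])                 # we observe the first state component
Q = np.array([[1.0, 0.0], [0.0, 0.0]])     # process noise drives y_t only
R = np.array([[0.1]])                      # observation-noise covariance

def kalman_predict(ys):
    """Return one-step-ahead predictions for the observed series ys."""
    x_hat = np.zeros((2, 1))               # prior state estimate x_{t|t-1}
    P = np.eye(2)                          # prior estimate covariance
    preds = []
    for y in ys:
        y_pred = (C @ x_hat).item()        # forecast of y_t given the past
        preds.append(y_pred)
        S = C @ P @ C.T + R                      # innovation covariance
        K = A @ P @ C.T @ np.linalg.inv(S)       # predictor (Kalman) gain
        x_hat = A @ x_hat + K * (y - y_pred)     # innovation-corrected prediction
        P = A @ P @ A.T + Q - K @ S @ K.T        # covariance recursion
    return np.array(preds)

# Simulate the AR(2) process, add observation noise, and forecast it.
rng = np.random.default_rng(0)
y = np.zeros(202)
for t in range(2, 202):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + rng.normal()
obs = y[2:] + rng.normal(scale=0.1 ** 0.5, size=200)
print("MSE of one-step forecasts:", np.mean((kalman_predict(obs) - obs) ** 2))
```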
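In practice, the Stackelberg formulation is solved by adversarial training: an inner attacker maximizes forecast error within a stealth budget, and an outer defender minimizes the resulting worst-case loss. The sketch below is a generic PyTorch approximation that uses a PGD inner loop with an L-infinity budget `eps` standing in for the paper's detection budget; the model, data loader, and hyperparameters are placeholders, not the authors' setup.

```python
import torch
import torch.nn as nn

loss_fn = nn.MSELoss()

def pgd_perturbation(model, history, target, eps, steps=10, alpha=0.01):
    """Attacker (inner) step: PGD perturbation of the input history,
    kept within an L-infinity "stealth" budget eps. This is a generic
    stand-in for the paper's stealthy adversary, not its exact attack."""
    delta = torch.zeros_like(history, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(history + delta), target)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend on forecast error
            delta.clamp_(-eps, eps)              # project back into the budget
        delta.grad.zero_()
    return delta.detach()

def adversarial_training_epoch(model, loader, opt, eps):
    """Defender (outer) step: train on worst-case perturbed histories,
    approximating the Stackelberg defender-attacker game."""
    for history, target in loader:
        delta = pgd_perturbation(model, history, target, eps)
        opt.zero_grad()                          # clear attacker-phase gradients
        loss = loss_fn(model(history + delta), target)
        loss.backward()
        opt.step()                               # minimize the worst-case loss
```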
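The instability bounds also have an intuitive core: in a linear recurrence, an input perturbation is amplified through powers of the transition matrix, so dynamics with spectral radius near or above one magnify small adversarial changes over the forecast horizon. The toy example below illustrates this generic effect only; the matrix and horizon are illustrative and are not the paper's bounds.

```python
import numpy as np

def amplification(rho, horizon=20, eps=0.01):
    """Output error per unit input error after `horizon` steps of the
    linear recurrence x_{t+1} = A x_t, for a one-off input perturbation."""
    A = np.array([[rho, 1.0], [0.0, rho]])  # Jordan block, spectral radius rho
    x = np.array([0.0, eps])                # perturbation entering the dynamics
    for _ in range(horizon):
        x = A @ x                           # propagate through the recurrence
    return np.linalg.norm(x) / eps

for rho in (0.5, 0.9, 0.99, 1.05):
    print(f"spectral radius {rho:.2f}: amplification {amplification(rho):.4f}")
```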