INTARG: Informed Real-Time Adversarial Attack Generation for Time-Series Regression

arXiv cs.LG / April 15, 2026


Key Points

  • The paper addresses the vulnerability of deep learning time-series forecasting models to adversarial attacks and notes that many existing attack methods do not fit realistic time-series constraints.
  • It introduces INTARG, an online bounded-buffer adversarial attack framework designed for time-series regression without needing full historical storage or attacking at every time step.
  • INTARG uses an informed and selective strategy that targets only specific time steps: those where the model is highly confident and the expected prediction error is maximal.
  • Experiments reported in the paper show up to a 2.42x increase in prediction error while executing attacks in fewer than 10% of time steps, indicating higher attack efficiency.
  • Overall, the work contributes a more practical adversarial attack methodology for time-series forecasting under online, resource-limited settings.
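The selective, bounded-buffer triggering described in the key points can be sketched as follows. This is an illustrative reconstruction, not the paper's actual algorithm: the class name, the confidence-times-error score, and the quantile threshold are all assumptions made for the example. The essential ideas it demonstrates are (a) only a fixed-size buffer of recent history is retained, and (b) an attack fires only when the current step's score lands in the top tail of recently observed scores, so attacks remain rare.

```python
from collections import deque
import numpy as np

class SelectiveAttackTrigger:
    """Illustrative online bounded-buffer attack trigger.

    Hypothetical sketch: the scoring rule (confidence * expected_error)
    and the quantile threshold are assumptions, not the paper's method.
    Only the last `buffer_size` observations are stored, so memory
    stays constant regardless of stream length.
    """

    def __init__(self, buffer_size=64, attack_quantile=0.9):
        self.buffer = deque(maxlen=buffer_size)   # bounded observation history
        self.scores = deque(maxlen=buffer_size)   # recent attack-worthiness scores
        self.attack_quantile = attack_quantile

    def observe(self, x, confidence, expected_error):
        """Record one observation; return True if this step should be attacked."""
        self.buffer.append(x)
        score = confidence * expected_error       # illustrative combined score
        self.scores.append(score)
        if len(self.scores) < self.scores.maxlen:
            return False                          # warm-up phase: never attack
        # Attack only when the current score is in the top tail of recent scores,
        # so attacks are issued at a small fraction of time steps.
        threshold = np.quantile(self.scores, self.attack_quantile)
        return score >= threshold
```

With `attack_quantile=0.9`, the trigger fires at roughly 10% of time steps after warm-up, which mirrors the "fewer than 10% of time steps" budget reported in the paper.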

Abstract

Time-series forecasting aims to predict future values by modeling temporal dependencies in historical observations. It is a critical component of many real-world systems, where accurate forecasts improve operational efficiency and help mitigate uncertainty and risk. More recently, machine learning (ML) models, and especially deep learning (DL) models, have gained widespread adoption for time-series forecasting, but they remain vulnerable to adversarial attacks. However, many state-of-the-art attack methods are not directly applicable in time-series settings, where storing complete historical data or performing attacks at every time step is often impractical. This paper proposes an adversarial attack framework for time-series forecasting under an online bounded-buffer setting, leveraging an informed and selective attack strategy. By selectively targeting time steps where the model exhibits high confidence and the expected prediction error is maximal, our framework produces fewer but substantially more effective attacks. Experiments show that our framework can increase the prediction error by up to 2.42x, while performing attacks in fewer than 10% of time steps.
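Once a time step is selected for attack, a perturbation of the input window must be crafted. The abstract does not specify the perturbation method, so the sketch below uses a generic one-step FGSM-style attack on a regression loss as a stand-in: the function name, the `grad_fn` callback, and the L-infinity budget `epsilon` are all assumptions for illustration.

```python
import numpy as np

def fgsm_regression_perturbation(window, grad_fn, epsilon=0.05):
    """One-step L-infinity-bounded perturbation of an input window.

    Generic FGSM-style step for a regression loss; illustrative only,
    not the paper's exact attack. `grad_fn(window)` is assumed to return
    dLoss/dWindow, e.g. the gradient of the forecaster's squared error
    with respect to the input window.
    """
    grad = grad_fn(window)
    # Move each input coordinate by epsilon in the direction that
    # increases the loss, so the perturbation stays within an
    # L-infinity ball of radius epsilon around the clean window.
    return window + epsilon * np.sign(grad)

# Usage with a toy linear forecaster y = w @ x and squared-error loss:
w = np.array([0.5, -0.2, 0.3])
target = 0.0
grad_fn = lambda x: 2.0 * (w @ x - target) * w  # d/dx of (w@x - target)^2
adv = fgsm_regression_perturbation(np.array([1.0, 2.0, -1.0]), grad_fn, epsilon=0.1)
```

Because only a small fraction of time steps is attacked, even a simple one-step perturbation like this can be applied within a tight real-time budget.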