AI Navigate

L2GTX: From Local to Global Time Series Explanations

arXiv cs.LG / 3/16/2026


Key Points

  • Introduces L2GTX, a model-agnostic framework for class-wise global explanations of time series classifiers, addressing limitations of prior XAI methods.
  • It extracts and clusters parameterized temporal event primitives (such as increasing/decreasing trends and local extrema) from local explanations and merges them across instances to estimate global relevance.
  • It uses an instance-cluster importance matrix and a user-defined instance selection budget to pick representative instances that maximize coverage of influential clusters, enabling concise global explanations.
  • Experiments on six benchmark time series datasets show L2GTX yields compact, interpretable global explanations with stable global faithfulness as measured by mean local surrogate fidelity.
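The budget-constrained selection step described above can be sketched as a greedy coverage heuristic. This is an illustrative sketch, not the paper's exact algorithm: the function name, the binary notion of "covering" a cluster, and the marginal-gain criterion are assumptions for demonstration.

```python
import numpy as np

def select_representatives(importance, budget):
    """Greedily pick representative instances under a selection budget.

    importance: (n_instances, n_clusters) instance-cluster importance
        matrix, where importance[i, c] scores how strongly instance i's
        local explanation expresses event cluster c.
    budget: user-defined number of instances to select.

    Sketch only (standard greedy set-cover heuristic, an assumption):
    at each step, pick the instance contributing the most importance
    mass over clusters not yet covered by earlier picks.
    """
    n, k = importance.shape
    covered = np.zeros(k, dtype=bool)
    selected = []
    for _ in range(min(budget, n)):
        # Marginal gain: importance over clusters not yet covered.
        gains = importance[:, ~covered].sum(axis=1)
        gains[selected] = -np.inf  # never re-pick an instance
        best = int(np.argmax(gains))
        selected.append(best)
        covered |= importance[best] > 0  # clusters this instance touches
    return selected
```

A usage sketch: with a 3x3 toy matrix where one instance touches all three clusters, that instance is picked first, and the next pick fills in whatever residual coverage remains.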

Abstract

Deep learning models achieve high accuracy in time series classification, yet understanding their class-level decision behaviour remains challenging. Explanations for time series must respect temporal dependencies and identify patterns that recur across instances. Existing approaches face three limitations: model-agnostic XAI methods developed for images and tabular data do not readily extend to time series, global explanation synthesis for time series remains underexplored, and most existing global approaches are model-specific. We propose L2GTX, a model-agnostic framework that generates class-wise global explanations by aggregating local explanations from a representative set of instances. L2GTX extracts clusters of parameterised temporal event primitives, such as increasing or decreasing trends and local extrema, together with their importance scores from instance-level explanations produced by LOMATCE. These clusters are merged across instances to reduce redundancy, and an instance-cluster importance matrix is used to estimate global relevance. Under a user-defined instance selection budget, L2GTX selects representative instances that maximise coverage of influential clusters. Events from the selected instances are then aggregated into concise class-wise global explanations. Experiments on six benchmark time series datasets show that L2GTX produces compact and interpretable global explanations while maintaining stable global faithfulness measured as mean local surrogate fidelity.
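The abstract's aggregation step, estimating global relevance from an instance-cluster importance matrix, can be illustrated with a toy example. The matrix values and the mean-over-instances aggregation below are assumptions for demonstration, not necessarily the paper's exact scheme.

```python
import numpy as np

# Hypothetical instance-cluster importance matrix M[i, c]:
# rows = explained instances, columns = merged event clusters
# (e.g. an increasing trend, a local maximum, a decreasing trend).
M = np.array([
    [0.8, 0.1, 0.0],
    [0.6, 0.0, 0.3],
    [0.7, 0.2, 0.0],
])

# One simple aggregation (an assumption): average each cluster's
# importance across instances, then rank clusters by that score to
# decide which temporal events dominate the class-wise explanation.
global_relevance = M.mean(axis=0)
ranking = np.argsort(global_relevance)[::-1]
```

Here the first cluster accumulates the most importance across instances, so it would rank as the most influential temporal event for this class.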