MSNet and LS-Net: Scalable Multi-Scale Multi-Representation Networks for Time Series Classification

arXiv cs.LG / 3/23/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper proposes MSNet, a hierarchical multi-scale convolutional network optimized for robustness and probabilistic calibration in univariate time series classification, and LS-Net, a lightweight variant designed for efficiency-aware deployment.
  • It adapts LiteMV to operate on multi-representation univariate signals, enabling cross-representation interaction and richer feature fusion.
  • Across 142 benchmark datasets, LiteMV achieves the highest mean accuracy, MSNet yields the best probabilistic calibration (lowest NLL), and LS-Net offers the best efficiency-accuracy tradeoff.
  • Pareto analysis indicates that multi-representation multi-scale modeling provides a flexible design space for accuracy-focused, calibration-focused, or resource-constrained settings.
  • The authors provide a reference implementation at https://github.com/alagoz/msnet-lsnet-tsc.
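The calibration claim above is stated in terms of negative log-likelihood (NLL): the average negative log-probability a model assigns to the true class, where lower is better. As a minimal illustrative sketch (not the paper's evaluation code), mean NLL over a batch of predicted probability vectors can be computed as:

```python
import math

def mean_nll(probs, labels, eps=1e-12):
    """Mean negative log-likelihood of the true class.

    probs: list of per-sample probability vectors (each summing to 1).
    labels: list of integer class indices.
    Lower values indicate better probabilistic calibration.
    """
    total = 0.0
    for p, y in zip(probs, labels):
        # Clamp to eps so a zero probability on the true class
        # does not produce log(0).
        total += -math.log(max(p[y], eps))
    return total / len(labels)

# A confident correct prediction contributes little NLL;
# a confident wrong one is penalized heavily.
print(mean_nll([[0.9, 0.1], [0.2, 0.8]], [0, 1]))
```

Unlike plain accuracy, NLL rewards well-placed confidence, which is why a model can trail on accuracy yet lead on calibration, as MSNet does here.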

Abstract

Time series classification (TSC) performance depends not only on architectural design but also on the diversity of input representations. In this work, we propose a scalable multi-scale convolutional framework that systematically integrates structured multi-representation inputs for univariate time series. We introduce two architectures: MSNet, a hierarchical multi-scale convolutional network optimized for robustness and calibration, and LS-Net, a lightweight variant designed for efficiency-aware deployment. In addition, we adapt LiteMV -- originally developed for multivariate inputs -- to operate on multi-representation univariate signals, enabling cross-representation interaction. We evaluate all models across 142 benchmark datasets under a unified experimental protocol. Critical Difference analysis confirms statistically significant performance differences among the top models. Results show that LiteMV achieves the highest mean accuracy, MSNet provides superior probabilistic calibration (lowest NLL), and LS-Net offers the best efficiency-accuracy tradeoff. Pareto analysis further demonstrates that multi-representation multi-scale modeling yields a flexible design space that can be tuned for accuracy-oriented, calibration-oriented, or resource-constrained settings. These findings establish scalable multi-representation multi-scale learning as a principled and practical direction for modern TSC. A reference implementation of MSNet and LS-Net is available at: https://github.com/alagoz/msnet-lsnet-tsc
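The Pareto analysis in the abstract weighs predictive quality against computational cost: a model sits on the efficiency-accuracy front if no other model is at least as cheap and at least as accurate while being strictly better in one. A minimal sketch of extracting such a front follows; the cost and accuracy numbers are made up for illustration and are not results from the paper:

```python
def pareto_front(models):
    """Return names of models not dominated by any other model.

    models: dict mapping name -> (cost, accuracy).
    A model is dominated if another model has lower-or-equal cost
    AND higher-or-equal accuracy, with at least one strict inequality.
    """
    front = []
    for name, (c, a) in models.items():
        dominated = any(
            c2 <= c and a2 >= a and (c2 < c or a2 > a)
            for n2, (c2, a2) in models.items()
            if n2 != name
        )
        if not dominated:
            front.append(name)
    return sorted(front)

# Hypothetical (cost, accuracy) pairs for illustration only.
models = {
    "LiteMV":   (3.0, 0.86),  # highest accuracy, highest cost
    "MSNet":    (2.0, 0.84),
    "LS-Net":   (0.5, 0.83),  # lightweight variant
    "Baseline": (2.5, 0.80),  # dominated by MSNet (cheaper AND more accurate)
}
print(pareto_front(models))
```

With these toy numbers, the three proposed models all lie on the front at different cost points, which is the sense in which the design space can be "tuned" for accuracy-oriented, calibration-oriented, or resource-constrained settings.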