MambaSL: Exploring Single-Layer Mamba for Time Series Classification

arXiv cs.LG · April 17, 2026


Key Points

  • The paper introduces MambaSL, a framework that minimally modifies a single-layer Mamba by redesigning selective state space model components and projection layers specifically for time series classification (TSC).
  • It is motivated by four TSC-focused hypotheses and targets the gap that, despite Mamba’s success in many sequence tasks, its standalone effectiveness for TSC had not been thoroughly studied.
  • To fix benchmarking shortcomings, the authors re-evaluate 20 strong baseline methods across all 30 UEA time series datasets using a unified, more comprehensive protocol.
  • The study reports state-of-the-art TSC performance for MambaSL with statistically significant average gains compared to re-evaluated baselines.
  • Reproducibility is emphasized through public checkpoints for all evaluated models, with additional visualizations supporting the claim that Mamba-based models can serve as a TSC backbone.

Abstract

Despite recent advances in state space models (SSMs) such as Mamba across various sequence domains, research on their standalone capacity for time series classification (TSC) has remained limited. We propose MambaSL, a framework that minimally redesigns the selective SSM and projection layers of a single-layer Mamba, guided by four TSC-specific hypotheses. To address benchmarking limitations -- restricted configurations, partial University of East Anglia (UEA) dataset coverage, and insufficiently reproducible setups -- we re-evaluate 20 strong baselines across all 30 UEA datasets under a unified protocol. As a result, MambaSL achieves state-of-the-art performance with statistically significant average improvements, while ensuring reproducibility via public checkpoints for all evaluated models. Together with visualizations, these results demonstrate the potential of Mamba-based architectures as a TSC backbone.
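The paper itself does not publish code in this summary, but the core mechanism it builds on, the selective SSM scan at the heart of a single Mamba layer, can be illustrated generically. The sketch below is a minimal NumPy rendering of a selective state space recurrence (input-dependent step size, input, and output projections), not the authors' MambaSL implementation; all shapes and weight names (`W_delta`, `W_B`, `W_C`) are illustrative assumptions.

```python
import numpy as np

def selective_ssm(x, A, W_delta, W_B, W_C):
    """Minimal selective state space scan (illustrative, not MambaSL itself).

    x: (T, d) input sequence; A: (d, n) state matrix (kept negative for a
    stable decay); W_delta, W_B, W_C: projections that make the step size and
    the input/output maps depend on the current input -- the "selective"
    mechanism that distinguishes Mamba from earlier SSMs.
    """
    T, d = x.shape
    h = np.zeros((d, A.shape[1]))          # hidden state, one n-dim state per channel
    y = np.empty((T, d))
    for t in range(T):
        delta = np.log1p(np.exp(x[t] @ W_delta))   # softplus -> positive step size, (d,)
        B = x[t] @ W_B                              # input-dependent input map, (n,)
        C = x[t] @ W_C                              # input-dependent output map, (n,)
        # discretized recurrence: h_t = exp(delta*A) * h_{t-1} + delta*B * x_t
        h = np.exp(delta[:, None] * A) * h + (delta[:, None] * B[None, :]) * x[t][:, None]
        y[t] = h @ C                                # read out, (d,)
    return y
```

For classification, a typical (hypothetical) head would mean-pool `y` over time and apply a linear layer to produce class logits; MambaSL's actual contribution is in how the selective SSM and projection layers around such a scan are redesigned for TSC.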