AI Navigate

LuMamba: Latent Unified Mamba for Electrode Topology-Invariant and Efficient EEG Modeling

arXiv cs.AI / 3/20/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • LuMamba is a self-supervised EEG modeling framework that combines topology-invariant encodings with linear-complexity state-space models to address varying electrode topologies and scalability challenges.
  • It uses LUNA's learned-query cross-attention for channel unification and FEMBA's bidirectional Mamba blocks for efficient temporal modeling.
  • The work investigates the Latent-Euclidean Joint-Embedding Predictive Architecture (LeJEPA) for biosignal learning, showing that combining masked reconstruction with LeJEPA yields more robust representations across tasks and electrode configurations.
  • Pre-trained on 21,000 hours of unlabeled EEG from the TUEG corpus, the 4.6M-parameter LuMamba reaches 80.99% balanced accuracy on TUAB and 0.97 AUPR for Alzheimer's detection, while requiring 377× fewer FLOPs than comparable models and scaling to 12× longer sequences; code is available at https://github.com/pulp-bio/biofoundation.
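The channel-unification idea above can be sketched with plain cross-attention: a fixed number of learned query vectors attend over however many electrode embeddings the montage provides, so the rest of the network always sees the same number of latent tokens. This is a minimal NumPy illustration of the mechanism, not LUNA's actual implementation; all names (`unify_channels`, `W_k`, `W_v`) and dimensions are made up for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def unify_channels(channel_feats, queries, W_k, W_v):
    """Map C channel embeddings (C, d) to a fixed set of Q latent
    tokens (Q, d) via cross-attention, so downstream layers never
    depend on the raw electrode count."""
    K = channel_feats @ W_k                               # (C, d)
    V = channel_feats @ W_v                               # (C, d)
    scores = queries @ K.T / np.sqrt(queries.shape[-1])   # (Q, C)
    return softmax(scores, axis=-1) @ V                   # (Q, d)

rng = np.random.default_rng(0)
d, Q = 32, 4
queries = rng.standard_normal((Q, d))  # learned queries: count is fixed
W_k = rng.standard_normal((d, d))
W_v = rng.standard_normal((d, d))

# A 16-channel and a 26-channel montage both map to (Q, d) latents.
for C in (16, 26):
    out = unify_channels(rng.standard_normal((C, d)), queries, W_k, W_v)
    print(out.shape)  # (4, 32) regardless of C
```

Because the query count Q is a hyperparameter rather than a property of the input, the same pretrained temporal backbone can be reused across the 16-to-26-channel configurations mentioned in the paper.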

Abstract

Electroencephalography (EEG) enables non-invasive monitoring of brain activity across clinical and neurotechnology applications, yet building foundation models for EEG remains challenging due to differing electrode topologies and computational scalability, as Transformer architectures incur quadratic sequence complexity. As a joint solution, we propose LuMamba (Latent Unified Mamba), a self-supervised framework combining topology-invariant encodings with linear-complexity state-space modeling, using LUNA's learned-query cross-attention mechanism for channel unification and FEMBA's bidirectional Mamba blocks for efficient temporal modeling. Within this architecture, we provide the first systematic investigation of the Latent-Euclidean Joint-Embedding Predictive Architecture (LeJEPA) for biosignal learning. Pre-trained on over 21,000 hours of unlabeled EEG from the TUEG corpus, LuMamba is evaluated on five downstream tasks spanning abnormality detection, artifact recognition, and mental condition classification across electrode configurations ranging from 16 to 26 channels. In the pre-training objective, masked reconstruction alone yields structured but less generalizable representations, while LeJEPA alone produces diffuse embeddings; combining both objectives achieves the most robust performance. With only 4.6M parameters, LuMamba attains 80.99% balanced accuracy on TUAB and achieves state-of-the-art performance on Alzheimer's detection (0.97 AUPR), while requiring 377× fewer FLOPs than state-of-the-art models at equivalent sequence lengths and scaling to 12× longer sequences before reaching typical GPU memory limits. Code is available at https://github.com/pulp-bio/biofoundation.
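The quadratic-versus-linear complexity claim behind the FLOPs and sequence-length numbers can be made concrete with a toy cost model. The constants below are illustrative, not the paper's measured FLOP counts: self-attention's score and mixing matmuls scale as O(T²·d), while a Mamba-style state-space scan scales as O(T·d·n) for a small state dimension n, so the gap widens linearly with sequence length.

```python
def attention_flops(seq_len, d_model):
    # Toy cost model: self-attention matmuls scale as O(T^2 * d).
    return 2 * seq_len * seq_len * d_model

def ssm_flops(seq_len, d_model, state_dim=16):
    # Toy cost model: a selective state-space scan scales as O(T * d * n).
    return 2 * seq_len * d_model * state_dim

for T in (1024, 4096, 16384):
    ratio = attention_flops(T, 64) / ssm_flops(T, 64)
    print(T, ratio)  # advantage ratio T / state_dim grows linearly with T
```

Under this model, doubling the sequence length doubles the state-space cost but quadruples the attention cost, which is why a linear-time backbone can process the 12× longer EEG sequences reported before hitting GPU memory limits.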