Rhamba: Region-Aware Hybrid Attention-Mamba Framework for Self-Supervised Learning in Resting-State fMRI
arXiv cs.LG / 5/5/2026
Key Points
- Rhamba is a new self-supervised pretraining framework for resting-state fMRI that combines anatomically guided, region-aware masking with hybrid Attention-Mamba sequence modeling.
- Pretraining on the ABIDE dataset uses region-aligned patch embeddings and three masking strategies (Any, Majority, and Pure, in order of increasing spatial specificity), and the paper evaluates several architectural variants: Mamba-only, alternating Mamba/Attention, and two hybrid encoder-decoder setups (AM and MA); illustrative sketches of the masking logic and one hybrid layout follow this list.
- In downstream fine-tuning for schizophrenia and ADHD detection (COBRE and ADHD-200), the masking strategy affected reconstruction loss in a consistent order (Any > Majority > Pure) but produced only modest, dataset-dependent differences in downstream performance.
- The MA hybrid configuration (Mamba-Attention) delivered the best average AUROC across both datasets; region-wise attribution via Integrated Gradients was used to interpret the models (see the sketch after this list), and peak performance depended on the interaction between masking strategy and architecture.
- The authors claim Rhamba outperforms prior state-of-the-art approaches while offering a flexible trade-off among interpretability, scalability, and performance for large-scale fMRI representation learning.