Self-Supervised Foundation Model for Calcium-imaging Population Dynamics
arXiv cs.AI / 4/8/2026
Key Points
- The paper introduces CalM, a self-supervised neural foundation model trained only on neuronal calcium traces to support multiple neuroscience objectives with better transferability than task-specific methods.
- CalM uses a tokenizer that converts single-neuron calcium traces into a shared discrete vocabulary, along with a dual-axis autoregressive transformer that models dependencies along both the neuron and time dimensions.
- Experiments on a large-scale, multi-animal, multi-session calcium imaging dataset show that CalM improves neural population dynamics forecasting over strong specialized baselines after pretraining.
- With a task-specific head, CalM also adapts effectively to behavior decoding, outperforming supervised decoding models.
- Representation analysis indicates that CalM learns interpretable functional structures, suggesting value beyond just predictive performance, and the authors note that code will be released soon.
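The tokenizer and dual-axis factorization described above can be illustrated with a toy sketch. This is not the paper's implementation: the quantile-binning tokenizer, the `vocab_size` value, and the neuron-then-time context ordering are all illustrative assumptions, standing in for whatever learned tokenizer and attention scheme CalM actually uses.

```python
import numpy as np

def fit_tokenizer(traces, vocab_size=16):
    # Assumed stand-in for a learned tokenizer: quantile bin edges
    # pooled across all neurons, so every neuron shares one vocabulary.
    qs = np.linspace(0, 1, vocab_size + 1)[1:-1]
    return np.quantile(traces, qs)

def tokenize(traces, edges):
    # Map each fluorescence sample to a discrete token id in [0, vocab_size)
    return np.digitize(traces, edges)

def dual_axis_context(tokens, n, t):
    # One illustrative autoregressive ordering over the neuron x time grid:
    # the token for neuron n at time t conditions on all neurons at earlier
    # times, plus lower-indexed neurons at the same time step.
    past_time = tokens[:, :t].ravel()
    same_time = tokens[:n, t]
    return np.concatenate([past_time, same_time])

rng = np.random.default_rng(0)
traces = rng.normal(size=(8, 100))   # 8 neurons, 100 frames of dF/F
edges = fit_tokenizer(traces, vocab_size=16)
tokens = tokenize(traces, edges)
ctx = dual_axis_context(tokens, n=3, t=10)
print(tokens.shape, int(tokens.min()), int(tokens.max()), ctx.shape)
```

The point of the sketch is the factorization: once traces are discretized into a shared vocabulary, a transformer can predict each token from a context spanning both axes, which is what enables forecasting population dynamics rather than single neurons in isolation.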

