Self-Supervised Foundation Model for Calcium-imaging Population Dynamics
arXiv cs.AI / 4/8/2026
Key Points
- The paper introduces CalM, a self-supervised neural foundation model trained only on neuronal calcium traces to support multiple neuroscience objectives with better transferability than task-specific methods.
- CalM uses a high-performance tokenizer that converts single-neuron traces into a shared discrete vocabulary, along with a dual-axis autoregressive transformer that models dependencies across both neural and time dimensions.
- Experiments on a large-scale, multi-animal, multi-session calcium imaging dataset show that, after pretraining, CalM outperforms strong specialized baselines at forecasting neural population dynamics.
- With a task-specific head, CalM also adapts effectively to behavior decoding, outperforming supervised decoding models.
- Representation analysis indicates that CalM learns interpretable functional structure, suggesting value beyond predictive performance alone; the authors note that code will be released soon.
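The tokenization step described above can be illustrated with a minimal sketch. Note that the paper's tokenizer is a learned, high-performance component (and the code is not yet released), so the uniform per-neuron binning below is only a hypothetical stand-in showing how continuous single-neuron traces could be mapped into a shared discrete vocabulary, over which a dual-axis autoregressive transformer would then model neuron and time dependencies:

```python
import numpy as np

def tokenize_traces(traces, vocab_size=256):
    """Map continuous calcium traces (neurons x time) to discrete tokens.

    Hypothetical stand-in for CalM's learned tokenizer: each neuron's
    trace is min-max normalized, then uniformly binned into vocab_size
    levels, giving all neurons a shared discrete vocabulary.
    """
    traces = np.asarray(traces, dtype=float)
    lo = traces.min(axis=1, keepdims=True)
    hi = traces.max(axis=1, keepdims=True)
    norm = (traces - lo) / np.maximum(hi - lo, 1e-9)  # per-neuron [0, 1]
    tokens = np.minimum((norm * vocab_size).astype(int), vocab_size - 1)
    return tokens

# Example: 3 neurons, 5 time steps of (synthetic) fluorescence values
traces = np.array([[0.0, 0.5, 1.0, 0.5, 0.0],
                   [1.0, 2.0, 3.0, 2.0, 1.0],
                   [0.1, 0.1, 0.1, 0.1, 0.2]])
tokens = tokenize_traces(traces, vocab_size=4)  # shape (3, 5), values in 0..3
```

Because neurons 0 and 1 have the same trace shape at different scales, per-neuron normalization maps them to identical token sequences, which is the kind of scale invariance a shared vocabulary needs across heterogeneous recordings.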