OmniMouse: Scaling properties of multi-modal, multi-task Brain Models on 150B Neural Tokens
arXiv cs.AI / 4/22/2026
Key Points
- The OmniMouse study trains multi-modal, multi-task “brain models” on a large neural dataset (3.1M neurons from 73 mice, 323 sessions) totaling over 150B neural tokens recorded during natural and controlled stimuli plus behavior.
- The models can flexibly perform three tasks at test time—neural prediction, behavioral decoding, and neural forecasting—individually or in combination.
- OmniMouse achieves state-of-the-art results, outperforming specialized single-task baselines across nearly all evaluation regimes for these neural modeling tasks.
- The paper finds that performance scales reliably with more data, but gains from larger model size saturate, suggesting brain modeling may be data-limited even with very large recordings.
- The authors propose that the observed scaling behavior could indicate phase transitions in neural modeling, where richer and larger datasets might produce qualitatively new capabilities akin to emergent effects in large language models.
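The data-versus-model scaling contrast above can be checked empirically by comparing scaling exponents: fit a line to performance versus scale in log-log space and compare slopes. A minimal sketch, using hypothetical, illustrative numbers (not figures from the paper):

```python
import numpy as np

# Hypothetical, illustrative measurements (NOT from the OmniMouse paper):
# performance vs. dataset size keeps improving, while performance vs.
# model size flattens out at large scale.
data_tokens = np.array([1e9, 5e9, 2e10, 8e10, 1.5e11])   # neural tokens seen
data_score = np.array([0.30, 0.38, 0.45, 0.52, 0.56])    # e.g. prediction corr.

model_params = np.array([1e7, 5e7, 2e8, 1e9, 5e9])       # parameter count
model_score = np.array([0.40, 0.48, 0.52, 0.54, 0.545])  # saturating curve

def loglog_slope(x, y):
    """Least-squares slope of log(y) vs. log(x): the empirical scaling exponent."""
    slope, _intercept = np.polyfit(np.log(x), np.log(y), 1)
    return slope

# A larger exponent means scale still pays off; an exponent near zero
# means the curve has saturated.
print(f"data-scaling exponent:  {loglog_slope(data_tokens, data_score):.3f}")
print(f"model-scaling exponent: {loglog_slope(model_params, model_score):.3f}")
```

With numbers like these, the data-scaling exponent comes out clearly larger than the model-scaling one, which is the signature the paper's "data-limited" interpretation rests on.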