SpikeCLR: Contrastive Self-Supervised Learning for Few-Shot Event-Based Vision using Spiking Neural Networks
arXiv cs.CV / 3/18/2026
Key Points
- SpikeCLR is a contrastive self-supervised learning framework that enables spiking neural networks to learn robust visual representations from unlabeled event data.
- The approach adapts frame-based contrastive methods to the spiking domain using surrogate gradient training and introduces event-specific augmentations that leverage spatial, temporal, and polarity information.
- Experiments on CIFAR10-DVS, N-Caltech101, N-MNIST, and DVS-Gesture show that self-supervised pretraining with fine-tuning outperforms supervised learning in low-data regimes, with gains in few-shot and semi-supervised settings.
- Ablation results reveal that combining spatial and temporal augmentations is essential for learning effective spatio-temporal invariances in event data.
- Representations learned by SpikeCLR transfer across datasets, supporting energy-efficient event-based models in label-scarce settings on neuromorphic hardware.
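The event-specific augmentations described above operate on the raw event stream rather than on frames. As a minimal sketch (not the paper's implementation; function name, parameter ranges, and the (x, y, t, p) event layout are assumptions), the three augmentation families might look like this in NumPy:

```python
import numpy as np

def augment_events(events, rng, width=128, height=128):
    """Hypothetical sketch of event-stream augmentations in the spirit of
    SpikeCLR's spatial/temporal/polarity families.

    events: (N, 4) array of (x, y, t, p) with p in {0, 1}.
    Ranges and probabilities here are illustrative, not from the paper.
    """
    ev = events.astype(np.float64).copy()
    # Spatial augmentation: random horizontal flip of x coordinates.
    if rng.random() < 0.5:
        ev[:, 0] = (width - 1) - ev[:, 0]
    # Temporal augmentation: random time-axis rescaling (stretches or
    # compresses the event stream's duration).
    ev[:, 2] *= rng.uniform(0.8, 1.2)
    # Polarity augmentation: random inversion of ON/OFF polarity bits.
    if rng.random() < 0.5:
        ev[:, 3] = 1.0 - ev[:, 3]
    return ev
```

Two independently augmented views of the same event stream would then be encoded by the SNN (trained with surrogate gradients) and pulled together by a contrastive loss, mirroring frame-based pipelines such as SimCLR.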