SpikeCLR: Contrastive Self-Supervised Learning for Few-Shot Event-Based Vision using Spiking Neural Networks

arXiv cs.CV / March 18, 2026

📰 News · Models & Research

Key Points

  • SpikeCLR is a contrastive self-supervised learning framework that enables spiking neural networks to learn robust visual representations from unlabeled event data.
  • The approach adapts frame-based contrastive methods to the spiking domain using surrogate gradient training and introduces event-specific augmentations that leverage spatial, temporal, and polarity information (a sketch of such augmentations follows this list).
  • Experiments on CIFAR10-DVS, N-Caltech101, N-MNIST, and DVS-Gesture show that self-supervised pretraining with fine-tuning outperforms supervised learning in low-data regimes, with gains in few-shot and semi-supervised settings.
  • Ablation results reveal that combining spatial and temporal augmentations is essential for learning effective spatio-temporal invariances in event data.
  • Representations learned by SpikeCLR transfer across datasets, supporting energy-efficient event-based models for neuromorphic hardware in label-scarce settings.
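
The paper's exact augmentation recipe is not reproduced here; the sketch below is only a minimal illustration of the three families named above (spatial, temporal, polarity) applied to raw (x, y, t, p) event tuples. All names and defaults (augment_events, jitter_us, p_swap, the sensor width) are hypothetical, not taken from the paper.

```python
import numpy as np

def augment_events(events, sensor_w=128, p_flip=0.5, jitter_us=100.0,
                   p_swap=0.5, rng=None):
    """Return an augmented copy of events, an (N, 4) array of (x, y, t, p).

    x, y are pixel coordinates, t is a timestamp (microseconds assumed),
    and the polarity p is 0 (OFF) or 1 (ON).
    """
    rng = np.random.default_rng() if rng is None else rng
    ev = events.astype(np.float64, copy=True)

    # Spatial: random horizontal flip of the x-coordinate.
    if rng.random() < p_flip:
        ev[:, 0] = sensor_w - 1 - ev[:, 0]

    # Temporal: Gaussian jitter on timestamps, then restore time order.
    ev[:, 2] += rng.normal(0.0, jitter_us, size=len(ev))
    ev = ev[np.argsort(ev[:, 2])]

    # Polarity: random global ON/OFF swap.
    if rng.random() < p_swap:
        ev[:, 3] = 1.0 - ev[:, 3]

    return ev

# Two independently augmented views of the same recording form a positive
# pair for the contrastive objective:
# view_a, view_b = augment_events(events), augment_events(events)
```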

Abstract

Event-based vision sensors provide significant advantages for high-speed perception, including microsecond temporal resolution, high dynamic range, and low power consumption. When combined with Spiking Neural Networks (SNNs), they can be deployed on neuromorphic hardware, enabling energy-efficient applications on embedded systems. However, this potential is severely limited by the scarcity of the large-scale labeled datasets required to train such models effectively. In this work, we introduce SpikeCLR, a contrastive self-supervised learning framework that enables SNNs to learn robust visual representations from unlabeled event data. We adapt prior frame-based methods to the spiking domain using surrogate gradient training and introduce a suite of event-specific augmentations that leverage spatial, temporal, and polarity transformations. Through extensive experiments on the CIFAR10-DVS, N-Caltech101, N-MNIST, and DVS-Gesture benchmarks, we demonstrate that self-supervised pretraining with subsequent fine-tuning outperforms supervised learning in low-data regimes, achieving consistent gains in few-shot and semi-supervised settings. Our ablation studies reveal that combining spatial and temporal augmentations is critical for learning effective spatio-temporal invariances in event data. We further show that the learned representations transfer across datasets, contributing to the development of powerful event-based models in label-scarce settings.
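
As a companion to the abstract's mention of surrogate gradient training, the following is a minimal sketch of how a spiking nonlinearity is typically made trainable in PyTorch: a hard Heaviside spike in the forward pass and a smooth "fast sigmoid" stand-in for its derivative in the backward pass. The surrogate shape, the hard-reset LIF dynamics, and all names here (SpikeFn, lif_forward, decay, threshold) are common conventions assumed for illustration, not confirmed details of SpikeCLR.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward."""

    @staticmethod
    def forward(ctx, membrane):
        ctx.save_for_backward(membrane)
        return (membrane > 0).float()  # binary spike where potential crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (membrane,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative, 1 / (1 + |u|)^2, stands in for
        # the zero-almost-everywhere derivative of the Heaviside step.
        return grad_output / (1.0 + membrane.abs()) ** 2

spike = SpikeFn.apply

def lif_forward(inputs, decay=0.9, threshold=1.0):
    """Unroll leaky integrate-and-fire dynamics over inputs of shape (T, B, F)."""
    u = torch.zeros_like(inputs[0])        # membrane potential
    out = []
    for x_t in inputs:                     # iterate over the T time steps
        u = decay * u + x_t                # leaky integration of input current
        s = spike(u - threshold)           # fire when the threshold is crossed
        u = u * (1.0 - s)                  # hard reset of neurons that fired
        out.append(s)
    return torch.stack(out)                # spike trains, shape (T, B, F)
```

The surrogate is what makes end-to-end training possible at all: the true spike derivative is zero almost everywhere, so without a smooth replacement no gradient from the contrastive loss would ever reach the earlier spiking layers.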