Scalable Learning in Structured Recurrent Spiking Neural Networks without Backpropagation

arXiv cs.AI / 5/4/2026


Key Points

  • The paper introduces a structured multi-layer recurrent spiking neural network (SNN) built from locally dense recurrent layers linked by largely fixed small-world long-range projections, enabling deep recurrence with sparse global communication.
  • It presents a supervised learning method that avoids backpropagation and surrogate gradients, combining population-based winner-take-all (WTA) teaching signals at the output with fixed random broadcast-alignment feedback.
  • Synaptic updates are driven entirely by local plasticity mechanisms, gated by low-dimensional modulatory neuron populations and implemented via three-factor learning rules with eligibility traces (a minimal sketch of this update follows the list).
  • The authors analyze algorithmic properties, computational complexity, and hardware feasibility, and report stable learning with competitive benchmark classification performance.
  • Overall, the work argues that combining structured recurrence with neuromodulatory, local learning rules can make scalable, hardware-friendly SNN training possible without gradient-based methods.
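The three-factor rule is the core local update: a Hebbian pre/post spike coincidence term accumulates into a per-synapse eligibility trace, and a low-dimensional modulatory signal decides when those traces are consolidated into actual weight changes. Below is a minimal NumPy sketch of that general pattern; the layer sizes, time constants, learning rate, and the scalar interface of the modulatory signal are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pre, n_post = 100, 50
W = rng.normal(0.0, 0.1, size=(n_post, n_pre))  # local synaptic weights
E = np.zeros_like(W)                            # eligibility traces, one per synapse

tau_e = 50.0  # eligibility-trace time constant (ms), assumed value
dt = 1.0      # simulation step (ms)
eta = 1e-3    # learning rate, assumed value

def step(pre_spikes, post_spikes, modulator):
    """One three-factor plasticity step (sketch).

    pre_spikes  : (n_pre,)  binary spike vector of presynaptic neurons
    post_spikes : (n_post,) binary spike vector of postsynaptic neurons
    modulator   : scalar third factor broadcast by a modulatory population
                  (e.g. derived from a WTA teaching signal); a float here.
    """
    global W, E
    # Factors 1 & 2: local Hebbian pre/post coincidence decays into the
    # eligibility trace instead of changing the weights directly.
    hebb = np.outer(post_spikes, pre_spikes)
    E += dt * (-E / tau_e + hebb)
    # Factor 3: the modulatory signal gates consolidation of the traces
    # into weight changes -- a purely local, gradient-free update.
    W += eta * modulator * E

# Toy usage: random spikes and a positive "reward-like" modulatory pulse.
pre = (rng.random(n_pre) < 0.05).astype(float)
post = (rng.random(n_post) < 0.05).astype(float)
step(pre, post, modulator=+1.0)
```

Because the trace outlives the spikes that created it, a modulatory pulse arriving later can still credit the synapses that were recently active, which is what lets the rule work without backpropagating through time.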

Abstract

Spiking Neural Networks (SNNs) provide a promising framework for energy-efficient and biologically grounded computation; however, scalable learning in deep recurrent architectures with sparse connectivity remains a major challenge. In this work, we propose a structured multi-layer recurrent SNN architecture composed of locally dense recurrent layers augmented with sparse small-world long-range projections to a readout population. The long-range connectivity is largely fixed, preserving routing efficiency and hardware scalability, while synaptic adaptation is performed using strictly local plasticity mechanisms. To enable supervised learning without backpropagation or surrogate gradients, we introduce a biologically motivated learning framework that combines: (i) population-based winner-take-all (WTA) teaching signals at the output layer, (ii) fixed random broadcast alignment feedback pathways, and (iii) low-dimensional modulatory neuron populations that gate synaptic updates through three-factor learning rules with eligibility traces. This design supports deep recurrent computation with sparse global communication and purely local synaptic updates. We analyze the algorithmic properties, computational complexity, and hardware feasibility of the proposed approach, and demonstrate stable learning and competitive performance on benchmark classification tasks. The results highlight the potential of structured recurrence and neuromodulatory learning to enable scalable, hardware-compatible SNN training beyond gradient-based methods.
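Two of the abstract's ingredients lend themselves to a compact illustration: a Watts-Strogatz-style mask for sparse small-world long-range projections, and a fixed random feedback matrix that broadcasts the output error to hidden neurons (broadcast alignment, so no weight transport and no backpropagated gradients). The sketch below is a generic rendering of those two ideas under assumed shapes and parameters, not the authors' implementation; the resulting per-neuron signal would feed the three-factor rule sketched above.

```python
import numpy as np

rng = np.random.default_rng(1)

n_hidden, n_out = 200, 10

def small_world_mask(n, k=4, p_rewire=0.1):
    """Ring of local connections plus random long-range shortcuts
    (Watts-Strogatz-style); returns a boolean connectivity mask."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            mask[i, (i + j) % n] = mask[i, (i - j) % n] = True
    # Rewire a fraction of local edges to random long-range targets
    # (self-loops are possible in this toy version).
    for (i, j) in np.argwhere(mask):
        if rng.random() < p_rewire:
            mask[i, j] = False
            mask[i, rng.integers(n)] = True
    return mask

# Fixed random broadcast-alignment feedback: output errors are projected
# back through B, which is never trained.
B = rng.normal(0.0, 1.0 / np.sqrt(n_out), size=(n_hidden, n_out))

def hidden_learning_signal(output_rates, target_onehot):
    # WTA-style teaching signal: push the target population up, others down,
    # then broadcast the error to hidden neurons through the fixed matrix B.
    error = target_onehot - output_rates
    return B @ error  # (n_hidden,) per-neuron modulatory signal

# Toy usage.
mask = small_world_mask(n_hidden)          # sparse long-range routing pattern
out = rng.random(n_out); out /= out.sum()  # fake output population rates
target = np.eye(n_out)[3]                  # one-hot teaching target
m = hidden_learning_signal(out, target)    # gates the three-factor update
```

Keeping both the long-range routing and the feedback pathway fixed is what makes the scheme hardware-friendly: only the local synapses adapt, while global communication stays sparse and static.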