Congestion-Aware Dynamic Axonal Delay for Spiking Neural Networks

arXiv cs.LG / 5/5/2026


Key Points

  • The paper introduces a “Congestion-Aware Dynamic Axonal Delay” method for spiking neural networks that adapts delays based on synaptic activity rather than using static, per-synapse delays.
  • The approach decomposes delay into a channel-wise static base delay (for temporal structure) and a global, activity-conditioned shift that regulates state update rate under varying spike intensities.
  • Delay parameters are learned end-to-end via differentiable linear interpolation and discretized during inference, aiming to keep accuracy benefits with minimal extra computational cost.
  • Experiments on event-driven speech benchmarks (SHD, SSC, and GSC-35) show notable accuracy improvements, reaching 93.75% on SHD, 80.49% on SSC, and 95.53% on GSC-35.
  • The method also reduces the number of delay parameters by about 50% versus existing state-of-the-art delay-based approaches with the same network architecture.
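The core mechanism described above (a per-channel static base delay plus a single global shift conditioned on spike activity, made differentiable via linear interpolation between the two nearest integer delays) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the sigmoid activity gating, the `alpha` steepness parameter, and the use of mean firing rate as the congestion signal are all hypothetical choices for the sketch.

```python
import numpy as np

def fractional_delay(spikes, delays):
    """Shift each channel of a [T, C] spike train by a possibly fractional
    delay, using linear interpolation between the two nearest integer shifts.
    The interpolation weight is differentiable in `delays`, which is what
    allows the delays to be learned end-to-end; at inference time they can
    simply be rounded to integers."""
    T, C = spikes.shape
    out = np.zeros((T, C), dtype=float)
    lo = np.floor(delays).astype(int)   # lower integer delay per channel
    frac = delays - lo                  # interpolation weight in [0, 1)
    for c in range(C):
        d0, d1 = lo[c], lo[c] + 1
        # weight (1 - frac) on the floor shift, frac on the ceil shift
        if d0 < T:
            out[d0:, c] += (1.0 - frac[c]) * spikes[:T - d0, c]
        if d1 < T:
            out[d1:, c] += frac[c] * spikes[:T - d1, c]
    return out

def congestion_aware_delay(spikes, base_delay, alpha=4.0, max_shift=2.0):
    """Total delay = channel-wise static base delay + one global shift
    conditioned on spike activity. Here the congestion signal is the mean
    firing rate and the gating is a sigmoid -- illustrative assumptions,
    not the paper's exact formulation."""
    activity = spikes.mean()  # proxy for current spike intensity
    shift = max_shift / (1.0 + np.exp(-alpha * (activity - 0.5)))
    return fractional_delay(spikes, base_delay + shift)

# Usage: three channels, a spike at t=0 in each, different base delays.
spikes = np.zeros((8, 3))
spikes[0, :] = 1.0
delayed = congestion_aware_delay(spikes, base_delay=np.array([1.0, 2.5, 0.0]))
```

A fractional delay of, say, 1.5 steps splits a unit spike into weight 0.5 at shift 1 and weight 0.5 at shift 2, so gradients flow to the delay parameter through those weights; rounding at inference recovers a plain integer shift.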

Abstract

Spiking Neural Networks (SNNs) are widely regarded as an energy-efficient paradigm for modeling and processing temporal and event-driven information. Incorporating delays in SNNs has been proven to be an effective mechanism for improving spike alignment in event-driven tasks. However, existing delay learning approaches predominantly assign static delays to individual synapses, resulting in a large number of delay parameters and limited adaptability to input-dependent activity dynamics. To address this, we propose a Congestion-Aware Dynamic Axonal Delay mechanism, decomposing the delay into a channel-wise static base delay for temporal structuring and a global, activity-conditioned shift that dynamically regulates the state update rate under varying spike intensities. The delay parameters are learned using differentiable linear interpolation and discretized at inference time, preserving the benefits of our dynamic delay while incurring only minimal additional cost. Experiments on speech benchmarks, including the Spiking Heidelberg Dataset, Spiking Speech Commands, and Google Speech Commands, demonstrate that introducing congestion-aware delays into synaptic signal transmission effectively improves accuracy on temporal tasks, notably achieving 93.75% accuracy on SHD, 80.49% accuracy on SSC, and 95.53% on GSC-35, while reducing the parameter count by approximately 50% compared to state-of-the-art delay-based methods with the same architecture.