QB-LIF: Learnable-Scale Quantized Burst Neurons for Efficient SNNs

arXiv cs.CV · April 29, 2026


Key Points

  • The paper addresses a key bottleneck in binary spike coding for spiking neural networks (SNNs): using only 1 bit per timestep limits information throughput, especially in deep models with short simulation horizons.
  • It introduces the Quantized Burst-LIF (QB-LIF) neuron, which models burst spiking as a saturated uniform quantization of membrane potentials with a learnable scale parameter per layer (a minimal sketch of this firing rule follows this list).
  • Unlike methods that depend on fixed, predefined multi-threshold burst structures, QB-LIF adapts each layer’s spiking resolution automatically based on membrane-potential statistics.
  • For efficient hardware execution, the authors propose an “absorbable scale” approach that folds the learned quantization scale into synaptic weights at inference time while keeping an accumulate-only (AC) compute paradigm (sketched below).
  • To stabilize training in the discrete, multi-level quantization space, they develop ReLSG-ET, a rectified-linear surrogate gradient with exponential tails that maintains gradient flow across burst intervals (see the surrogate sketch after this list).
  • Experiments show QB-LIF outperforms binary and fixed-burst SNNs on both static and event-driven benchmarks, reaching higher accuracy at ultra-low latency.
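
Read literally, the firing rule amounts to rounding the membrane potential against a learned step size and clipping the result to a maximum burst count. The sketch below illustrates one such timestep in PyTorch; the function name qb_lif_step, the time constant tau, the saturation level max_burst, and the soft-reset choice are illustrative assumptions, not details confirmed by the paper.

```python
# Minimal sketch of one QB-LIF timestep, assuming the firing rule is a
# saturated uniform quantizer with a learnable per-layer scale.
import torch

def qb_lif_step(v, x, scale, tau: float = 2.0, max_burst: int = 4):
    """One hypothetical timestep of a QB-LIF layer.

    v     : membrane potential carried over from the previous timestep
    x     : input current (weighted presynaptic spike counts)
    scale : learnable positive quantization scale for this layer (assumed scalar)
    """
    v = v + (x - v) / tau                    # leaky integration
    # Burst emission = saturated uniform quantization of the potential:
    # round(v / scale), clipped to {0, ..., max_burst} spikes this timestep.
    q = torch.clamp(torch.round(v / scale), 0, max_burst)
    v = v - q * scale                        # soft reset by the emitted charge
    return q, v
```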
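Because an emitted burst count q stands in for q * scale units of membrane potential, the learned scale can be premultiplied into the receiving layer's weights once, offline, so the chip only ever accumulates integer spike counts. A hedged sketch of that folding step, with fold_scale_into_weights as a hypothetical helper rather than the paper's API:

```python
# Sketch of the "absorbable scale" idea: fold the upstream layer's learned
# quantization scale into the next layer's weights at inference time.
import torch

@torch.no_grad()
def fold_scale_into_weights(w_next: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Return weights with the upstream quantization scale absorbed.

    A burst count q contributes q * scale * w of input current downstream;
    precomputing w' = scale * w keeps runtime compute accumulate-only (AC):
    integer counts against folded weights, no extra multiplies.
    """
    return w_next * scale
```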
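For training, the rounding step needs a surrogate gradient. One plausible reading of ReLSG-ET is a straight-through estimator whose backward pass is flat inside the active burst interval and decays exponentially outside it; the sketch below implements that reading. The class name, the tail constant alpha, and the omission of a gradient for the scale are all assumptions, not the paper's exact definition.

```python
# Sketch of a rectified-linear surrogate gradient with exponential tails:
# constant gradient inside the burst interval, exponentially decaying
# gradient outside it, so distant potentials still receive a signal.
import torch

class QuantBurstSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v, scale, max_burst: int = 4, alpha: float = 2.0):
        ctx.save_for_backward(v, scale)
        ctx.max_burst, ctx.alpha = max_burst, alpha
        return torch.clamp(torch.round(v / scale), 0, max_burst)

    @staticmethod
    def backward(ctx, grad_out):
        v, scale = ctx.saved_tensors
        max_burst, alpha = ctx.max_burst, ctx.alpha
        x = v / scale
        inside = (x >= 0) & (x <= max_burst)           # active burst interval
        # Distance to the nearest interval edge drives the exponential tails.
        dist = torch.where(x < 0, -x, x - max_burst).clamp(min=0)
        sg = torch.where(inside,
                         torch.ones_like(x),           # rectified-linear core
                         torch.exp(-alpha * dist))     # exponential tails
        grad_v = grad_out * sg / scale
        # Scale gradient omitted in this sketch; in practice an LSQ-style
        # estimator would be used so the scale remains learnable.
        return grad_v, None, None, None
```

Calling QuantBurstSTE.apply(v, scale) in place of the hard quantizer keeps the forward pass discrete while letting gradients reach neurons whose potentials fall outside the burst interval.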

Abstract

Binary spike coding enables sparse and event-driven computation in spiking neural networks (SNNs), yet its 1-bit-per-timestep representation fundamentally limits information throughput. This bottleneck becomes increasingly restrictive in deep architectures under short simulation horizons. We propose the Quantized Burst-LIF (QB-LIF) neuron, which reformulates burst spiking as a saturated uniform quantization of membrane potentials with a learnable scale. Instead of relying on predefined multi-threshold structures, QB-LIF treats the quantization scale as a trainable parameter, allowing each layer to autonomously adapt its spiking resolution to the underlying membrane-potential statistics. To preserve hardware efficiency, we introduce an absorbable scale strategy that folds the learned quantization scale into synaptic weights during inference, maintaining a strict accumulate-only (AC) execution paradigm. To enable stable optimization in the discrete multi-level space, we further design ReLSG-ET, a rectified-linear surrogate gradient with exponential tails that sustains gradient flow across burst intervals. Extensive experiments on static (CIFAR-10/100, ImageNet) and event-driven (CIFAR10-DVS, DVS128-Gesture) benchmarks demonstrate that QB-LIF consistently outperforms binary and fixed-burst SNNs, achieving higher accuracy under ultra-low latency while preserving neuromorphic compatibility.
