Reconstructing Spiking Neural Networks Using a Single Neuron with Autapses

arXiv cs.AI / 3/27/2026


Key Points

  • The paper introduces TDA-SNN, a spiking neural network framework that reconstructs multilayer SNN behaviors using only a single leaky integrate-and-fire (LIF) neuron with time-delayed autapses, i.e., synapses a neuron forms onto itself (a minimal sketch of this mechanism follows the list).
  • It combines prototype-learning-based training with internal temporal-state reorganization to emulate reservoir, MLP-like, and convolution-like spiking architectures within one unified framework.
  • Experiments across sequential, event-based, and image benchmarks show competitive results for reservoir and MLP settings, while convolutional performance reflects an explicit space–time trade-off.
  • Compared with standard SNNs, TDA-SNN targets major reductions in neuron count and state memory by increasing per-neuron information capacity, but may require additional temporal latency in extreme single-neuron configurations.
  • Overall, the work positions temporally multiplexed single-neuron models as compact brain-inspired computational units for neuromorphic computing.
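
Here is a minimal sketch of the mechanism named above: one leaky integrate-and-fire neuron whose own spikes return through several time-delayed autapses, so the delay line itself carries temporal state. This is an illustrative assumption, not the authors' code; the class name, delay values, weights, and reset policy are all hypothetical stand-ins.

```python
import numpy as np

class AutapticLIF:
    """LIF neuron whose own past spikes return through delayed self-synapses."""

    def __init__(self, delays, weights, tau=20.0, v_th=1.0, v_reset=0.0):
        assert len(delays) == len(weights) and all(d >= 1 for d in delays)
        self.delays = list(delays)                 # autapse delays, in time steps
        self.weights = np.asarray(weights, float)  # one weight per autapse
        self.decay = np.exp(-1.0 / tau)            # membrane leak factor per step
        self.v_th, self.v_reset = v_th, v_reset
        self.v = v_reset                           # membrane potential
        self.buf = np.zeros(max(delays) + 1)       # ring buffer of past spikes
        self.t = 0

    def step(self, x):
        """Advance one time step with input current x; return a 0/1 spike."""
        L = len(self.buf)
        # autaptic feedback: weighted sum of this neuron's own spikes d steps ago
        fb = sum(w * self.buf[(self.t - d) % L]
                 for w, d in zip(self.weights, self.delays))
        self.v = self.decay * self.v + x + fb
        spike = float(self.v >= self.v_th)
        if spike:
            self.v = self.v_reset                  # hard reset after firing
        self.buf[self.t % L] = spike               # record this step's spike
        self.t += 1
        return spike

# Toy usage: drive the neuron with random current and collect its spike train.
rng = np.random.default_rng(0)
neuron = AutapticLIF(delays=[1, 3, 7], weights=[0.5, -0.3, 0.2])
spikes = [neuron.step(0.3 * rng.random()) for _ in range(100)]
```

The ring buffer is the only state beyond the membrane potential; delayed self-feedback lets a single neuron mix information from several past time steps, which is one plausible reading of the "increased per-neuron information capacity" claim.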

Abstract

Spiking neural networks (SNNs) are promising for neuromorphic computing, but high-performing models still rely on dense multilayer architectures with substantial communication and state-storage costs. Inspired by autapses, we propose time-delayed autapse SNN (TDA-SNN), a framework that reconstructs SNNs with a single leaky integrate-and-fire neuron and a prototype-learning-based training strategy. By reorganizing internal temporal states, TDA-SNN can realize reservoir, multilayer perceptron, and convolution-like spiking architectures within a unified framework. Experiments on sequential, event-based, and image benchmarks show competitive performance in reservoir and MLP settings, while convolutional results reveal a clear space–time trade-off. Compared with standard SNNs, TDA-SNN greatly reduces neuron count and state memory while increasing per-neuron information capacity, at the cost of additional temporal latency in extreme single-neuron settings. These findings highlight the potential of temporally multiplexed single-neuron models as compact computational units for brain-inspired computing.
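
The abstract's two central claims, temporal multiplexing of one neuron into many logical units and the resulting space–time trade-off, can be made concrete with a toy sketch. Everything below is an assumption for illustration rather than the paper's actual TDA-SNN procedure: the function names `multiplexed_lif_layer` and `prototype_readout` are hypothetical, a single LIF update loop is shared round-robin across N logical units, and a simple nearest-prototype readout stands in for the prototype-learning-based training the abstract mentions.

```python
import numpy as np

def multiplexed_lif_layer(x, W, tau=20.0, v_th=1.0):
    """One physical neuron emulates N logical LIF units in serial time slots.

    x: (T, D) input spike trains; W: (N, D) weights, one row per logical unit.
    Returns (T, N) output spikes. Cost: T*N sequential updates instead of T,
    i.e. latency is traded for the N-fold reduction in neuron count.
    """
    T, D = x.shape
    N = W.shape[0]
    decay = np.exp(-1.0 / tau)
    # Per-unit membrane states, kept as an explicit array for clarity; the
    # paper instead reorganizes such state as internal temporal states of
    # one neuron (per the abstract).
    v = np.zeros(N)
    out = np.zeros((T, N))
    for t in range(T):                   # outer loop: input frames
        for j in range(N):               # inner loop: serial slots, one per unit
            v[j] = decay * v[j] + W[j] @ x[t]
            if v[j] >= v_th:
                out[t, j] = 1.0
                v[j] = 0.0               # reset-to-zero after a spike
    return out

def prototype_readout(features, prototypes):
    """Assign each feature vector to the class of its nearest prototype."""
    d = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Toy usage: random spike trains through the multiplexed layer, then a
# nearest-prototype classification over the output spike counts.
rng = np.random.default_rng(0)
W = rng.normal(0, 0.5, size=(8, 16))            # 8 logical units, 16 inputs
x = (rng.random((50, 16)) < 0.2).astype(float)  # 50-step Bernoulli spike train
counts = multiplexed_lif_layer(x, W).sum(axis=0, keepdims=True)  # (1, 8)
protos = rng.normal(5, 1, size=(3, 8))          # stand-in class prototypes
print(prototype_readout(counts, protos))        # predicted class index
```

The inner loop makes the trade-off explicit: emulating N units with one physical neuron multiplies the number of sequential membrane updates, and hence the latency, by N, in exchange for an N-fold reduction in neuron count.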