NeuronSpark: A Spiking Neural Network Language Model with Selective State Space Dynamics
arXiv cs.AI / 3/18/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- NeuronSpark introduces a 0.9B-parameter spiking neural network language model trained end to end with next-token prediction and surrogate gradients, without Transformer distillation (a surrogate-gradient sketch follows this list).
- The model employs selective state-space spiking dynamics, leakage-current inter-layer communication, PonderNet adaptive timesteps (sketched below), fused Triton PLIF kernels, and stabilization techniques such as residual centering, lateral-inhibition normalization, and natural-gradient compensation.
- With a constrained pretraining budget (~1.4B tokens) and 6.5K supervised fine-tuning steps, NeuronSpark reaches a pretraining loss of 3.6 and exhibits early multi-turn dialogue behavior after SFT.
- The results demonstrate the feasibility of end-to-end language modeling with a pure SNN architecture at this scale, suggesting new directions for neuromorphic NLP.
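The summary above doesn't include the paper's equations, so the following is only a minimal PyTorch sketch of the core mechanism the key points name: a parametric leaky integrate-and-fire (PLIF) neuron trained with a surrogate gradient, which replaces the non-differentiable spike with a smooth function on the backward pass. The sigmoid surrogate, the slope constant, the hard reset, and the names `SurrogateSpike`/`PLIFNeuron` are illustrative assumptions, not the paper's implementation (which fuses this loop into Triton kernels).

```python
import torch
import torch.nn as nn

class SurrogateSpike(torch.autograd.Function):
    """Heaviside step on the forward pass; sigmoid derivative as the surrogate gradient."""
    ALPHA = 4.0  # surrogate slope; an illustrative choice, not from the paper

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x >= 0).to(x.dtype)  # spike if membrane potential crosses threshold

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        sig = torch.sigmoid(SurrogateSpike.ALPHA * x)
        # d/dx sigmoid(a*x) = a * sig * (1 - sig): smooth stand-in for the Dirac spike derivative
        return grad_output * SurrogateSpike.ALPHA * sig * (1.0 - sig)

class PLIFNeuron(nn.Module):
    """Parametric LIF: the membrane leak constant is learned (sigmoid(w) = 1/tau)."""
    def __init__(self, threshold: float = 1.0):
        super().__init__()
        self.w = nn.Parameter(torch.tensor(0.0))  # learnable leak parameter
        self.threshold = threshold

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (timesteps, batch, features) input currents
        decay = torch.sigmoid(self.w)
        v = torch.zeros_like(x_seq[0])
        spikes = []
        for x_t in x_seq:
            v = v + decay * (x_t - v)                     # leaky integration
            s = SurrogateSpike.apply(v - self.threshold)  # fire
            v = v * (1.0 - s)                             # hard reset where a spike occurred
            spikes.append(s)
        return torch.stack(spikes)

# Usage: 8 spiking timesteps over a batch of 4, feature width 64
spikes = PLIFNeuron()(torch.randn(8, 4, 64))
```

The sequential timestep loop is the obvious bottleneck of this formulation, which is presumably why the paper fuses it into dedicated Triton kernels.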
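PonderNet adaptive timesteps let the model learn how many spiking steps to spend before committing to an output. Below is a hedged sketch of the standard PonderNet halting distribution (Banino et al., 2021) applied over spiking timesteps; the function name and tensor shapes are assumptions for illustration, not the paper's code.

```python
import torch

def ponder_halting(halting_logits: torch.Tensor):
    """Turn per-step halting logits into a PonderNet halting distribution.

    halting_logits: (timesteps, batch); sigmoid(logit_t) is the probability of
    halting at step t given the model has not halted earlier.
    Returns p of shape (timesteps, batch), a proper distribution over steps,
    plus the expected number of timesteps under p.
    """
    lam = torch.sigmoid(halting_logits)
    T = lam.shape[0]
    p, not_halted = [], torch.ones_like(lam[0])
    for t in range(T):
        if t == T - 1:
            p.append(not_halted)  # remaining mass is forced onto the final step
        else:
            p.append(not_halted * lam[t])
            not_halted = not_halted * (1.0 - lam[t])
    p = torch.stack(p)
    steps = torch.arange(1, T + 1, dtype=p.dtype, device=p.device)
    return p, (p * steps.unsqueeze(1)).sum(0)  # distribution, expected steps per example

# Usage: 8 candidate timesteps, batch of 4
p, expected_steps = ponder_halting(torch.randn(8, 4))
```

At inference, steps are typically sampled from (or cut off by) this distribution, so tokens that need less computation consume fewer spiking timesteps.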