AI Navigate

Chemical Reaction Networks Learn Better than Spiking Neural Networks

arXiv cs.LG / 3/13/2026


Key Points

  • The authors prove, using deterministic mass-action kinetics, that chemical reaction networks without hidden layers can learn classification tasks that previously required spiking neural networks with hidden layers.
  • They provide analytical regret bounds, analyze the network's asymptotic behavior, and study its Vapnik-Chervonenkis (VC) dimension.
  • In numerical experiments, the proposed chemical reaction network classifies handwritten digits and can outperform a spiking neural network with hidden layers in accuracy and efficiency.
  • The work motivates machine learning in chemical computers and offers a mathematical explanation for why biochemical reaction networks might learn more efficiently than neuronal networks.
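The analysis rests on the deterministic mass-action kinetics formulation of chemical reaction networks, in which species concentrations evolve by ODEs whose rates are products of reactant concentrations. The toy simulation below is a minimal sketch of that formulation only, not the paper's learning network: it integrates a single reversible reaction A + B ⇌ C with assumed rate constants `kf` and `kr` using explicit Euler steps.

```python
# Minimal sketch of deterministic mass-action kinetics (toy example,
# not the paper's construction): A + B <-> C with forward rate kf and
# reverse rate kr. Each reaction fires at rate = constant * product of
# reactant concentrations, and concentrations evolve by the net flux.

def mass_action_step(conc, dt, kf=1.0, kr=0.5):
    """One explicit Euler step for the species (A, B, C)."""
    a, b, c = conc
    fwd = kf * a * b      # rate of A + B -> C (mass-action: product of reactants)
    rev = kr * c          # rate of C -> A + B
    net = (rev - fwd) * dt
    # Stoichiometry: A and B change together, C changes oppositely.
    return (a + net, b + net, c - net)

def simulate(conc, dt=0.01, steps=2000):
    for _ in range(steps):
        conc = mass_action_step(conc, dt)
    return conc

# Start with only A and B present; the system relaxes toward the
# equilibrium where kf * a * b = kr * c (detailed balance).
a, b, c = simulate((1.0, 1.0, 0.0))
```

With the assumed constants, the symmetric initial condition relaxes to a ≈ b ≈ c ≈ 0.5, and the total a + c is conserved by the stoichiometry.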

Abstract

We mathematically prove that chemical reaction networks without hidden layers can solve tasks for which spiking neural networks require hidden layers. Our proof uses the deterministic mass-action kinetics formulation of chemical reaction networks. Specifically, we prove that a certain reaction network without hidden layers can learn a classification task previously proved to be achievable by a spiking neural network with hidden layers. We provide analytical regret bounds for the global behavior of the network and analyze its asymptotic behavior and Vapnik-Chervonenkis dimension. In a numerical experiment, we confirm the learning capacity of the proposed chemical reaction network for classifying handwritten digits in pixel images, and we show that it solves the task more accurately and efficiently than a spiking neural network with hidden layers. This provides a motivation for machine learning in chemical computers and a mathematical explanation for how biological cells might exhibit more efficient learning behavior within biochemical reaction networks than neuronal networks.