Binary Spiking Neural Networks as Causal Models

arXiv cs.AI / 5/1/2026


Key Points

  • The paper presents a causal analysis framework for Binary Spiking Neural Networks (BSNNs) by formalizing their spiking dynamics as a binary causal model.
  • It shows that, once framed causally, BSNN outputs can be explained using logic-based reasoning techniques, including SAT and SMT solvers for abductive explanations.
  • The authors train a BSNN on MNIST and use SAT/SMT methods to generate feature-level (pixel-level) abductive explanations of the network’s classifications.
  • The generated explanations are compared against SHAP; unlike SHAP, the proposed method is claimed to guarantee that explanations contain no completely irrelevant features.
  • Overall, the work connects spiking neural networks with causal modeling and formal verification-style solvers to improve interpretability guarantees.
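The core procedure, fixing an input and shrinking the set of fixed features as long as the prediction remains entailed, can be illustrated with a minimal sketch. The toy network, the feature count, and the exhaustive entailment check below are all illustrative stand-ins: the paper uses a trained BSNN and a SAT/SMT solver, not the brute-force enumeration shown here.

```python
from itertools import product

def bsnn(x):
    # Hypothetical stand-in for a trained BSNN's binary input-output map.
    x1, x2, x3, x4 = x
    return (x1 and x3) or (x2 and x4)

def entails(fixed, instance, prediction, n=4):
    # Does fixing the features in `fixed` to their instance values force the
    # prediction for every completion of the free features? A SAT/SMT solver
    # would decide this by checking that (model AND fixed-literals AND
    # NOT prediction) is unsatisfiable; here we simply enumerate completions.
    free = [i for i in range(n) if i not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, b in zip(free, bits):
            x[i] = b
        if bsnn(x) != prediction:
            return False
    return True

def abductive_explanation(instance):
    prediction = bsnn(instance)
    fixed = set(range(len(instance)))   # start with all features fixed
    for i in range(len(instance)):      # deletion-based minimization
        if entails(fixed - {i}, instance, prediction):
            fixed.remove(i)
    return fixed

print(sorted(abductive_explanation([1, 0, 1, 1])))  # → [0, 2]
```

The returned set is subset-minimal: dropping any remaining feature would allow some completion of the inputs to flip the prediction, which is exactly the guarantee that no completely irrelevant feature appears in the explanation.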

Abstract

We provide a causal analysis of Binary Spiking Neural Networks (BSNNs) to explain their behavior. We formally define a BSNN and represent its spiking activity as a binary causal model. Thanks to this causal representation, we are able to explain the output of the network by leveraging logic-based methods. In particular, we show that we can successfully use both a SAT solver and an SMT solver to compute abductive explanations from this binary causal model. To illustrate our approach, we trained a BSNN on the standard MNIST dataset and applied our SAT-based and SMT-based methods to find abductive explanations of the network's classifications based on pixel-level features. We also compared the found explanations against SHAP, a popular method in the area of explainable AI. We show that, unlike SHAP, our approach guarantees that a found explanation does not contain completely irrelevant features.