From Arithmetic to Logic: The Resilience of Logic and Lookup-Based Neural Networks Under Parameter Bit-Flips

arXiv cs.LG · March 25, 2026


Key Points

  • The paper studies neural network robustness to hardware-induced parameter bit-flips by modeling resilience as an architectural/structural property rather than a dataset- or training-specific artifact.
  • It derives expected MSE under independent bit-flip corruption across multiple numerical formats and layer primitives, finding that lower precision, higher sparsity, bounded activations, and shallow depth generally improve fault tolerance.
  • The authors argue, and support experimentally, that logic- and lookup-table (LUT)-based neural networks jointly realize the best case of these design trends, yielding a favorable accuracy-versus-resilience trade-off.
  • Ablation experiments on the MLPerf Tiny benchmark suite show that LUT-based models remain stable in corruption regimes where standard floating-point networks degrade sharply.
  • The work also identifies an “even-layer recovery” effect unique to logic-based architectures and characterizes the structural conditions that enable it.
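The corruption model underlying these results (each stored parameter bit flipped independently with some probability) is easy to reproduce empirically. The sketch below is not the authors' code: it injects independent bit flips into float32 weights and into an assumed per-tensor-scale int8 quantization of the same weights, then compares the resulting parameter MSE. The flip probability and quantization scheme are illustrative choices.

```python
import numpy as np

def flip_bits(raw_u8, p, rng):
    """Flip each bit of a uint8 byte array independently with probability p."""
    bits = np.unpackbits(raw_u8)
    mask = (rng.random(bits.size) < p).astype(np.uint8)
    return np.packbits(bits ^ mask)

def bitflip_mse_fp32(w, p, rng):
    """Empirical parameter MSE after independent bit flips on float32 weights."""
    corrupted = flip_bits(w.view(np.uint8).copy(), p, rng)
    bad = np.frombuffer(corrupted.tobytes(), dtype=np.float32).astype(np.float64)
    bad = np.nan_to_num(bad, nan=0.0, posinf=0.0, neginf=0.0)  # clamp non-finite patterns
    return float(np.mean((bad - w.astype(np.float64)) ** 2))

def bitflip_mse_int8(w, p, rng):
    """Empirical parameter MSE after bit flips on int8 weights (per-tensor scale)."""
    scale = float(np.abs(w).max()) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    corrupted = flip_bits(q.view(np.uint8).copy(), p, rng)
    bad = np.frombuffer(corrupted.tobytes(), dtype=np.int8).astype(np.float64) * scale
    return float(np.mean((bad - q.astype(np.float64) * scale) ** 2))

w = np.random.default_rng(0).standard_normal(10_000).astype(np.float32)
p = 1e-3  # per-bit flip probability (illustrative)
mse_fp32 = bitflip_mse_fp32(w, p, np.random.default_rng(1))
mse_int8 = bitflip_mse_int8(w, p, np.random.default_rng(1))
print(f"float32 MSE: {mse_fp32:.3g}  int8 MSE: {mse_int8:.3g}")
```

With these settings the float32 MSE is dominated by rare exponent-bit flips that produce astronomically large values, while the int8 error stays bounded by the quantization range, which is the qualitative trend the paper reports for lower-precision formats.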

Abstract

The deployment of deep neural networks (DNNs) in safety-critical edge environments necessitates robustness against hardware-induced bit-flip errors. While empirical studies indicate that reducing numerical precision can improve fault tolerance, the theoretical basis of this phenomenon remains underexplored. In this work, we study resilience as a structural property of neural architectures rather than solely as a property of a dataset-specific trained solution. By deriving the expected mean squared error (MSE) under independent parameter bit flips across multiple numerical formats and layer primitives, we show that lower precision, higher sparsity, bounded activations, and shallow depth are consistently favored under this corruption model. We then argue that logic and lookup-based neural networks realize the joint limit of these design trends. Through ablation studies on the MLPerf Tiny benchmark suite, we show that the observed empirical trends are consistent with the theoretical predictions, and that LUT-based models remain highly stable in corruption regimes where standard floating-point models fail sharply. Furthermore, we identify a novel even-layer recovery effect unique to logic-based architectures and analyze the structural conditions under which it emerges. Overall, our results suggest that shifting from continuous arithmetic weights to discrete Boolean lookups can provide a favorable accuracy-resilience trade-off for hardware fault tolerance.
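As a back-of-envelope illustration of why fixed-point formats are favored under this corruption model (this is not the paper's derivation, which covers multiple formats and layer primitives), consider an unsigned $b$-bit fixed-point weight $w = \sum_{i=1}^{b} b_i\,2^{-i}$ with uniformly random bits, each flipped independently with probability $p$. Flipping bit $i$ perturbs $w$ by $\pm 2^{-i}$, the cross terms vanish in expectation, and the per-parameter error is bounded:

```latex
\mathbb{E}\!\left[(\Delta w)^2\right]
  \;=\; \sum_{i=1}^{b} p\,2^{-2i}
  \;=\; \frac{p}{3}\left(1 - 4^{-b}\right)
  \;\le\; \frac{p}{3}.
```

By contrast, for an IEEE-754 float32 weight a single flip of the exponent's most significant bit rescales the value by roughly $2^{\pm 128}$, so the expected squared error contains terms of order $p\,2^{256} w^2$: unbounded relative to the fixed-point case, consistent with the claim that standard floating-point models degrade sharply under the same corruption.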
