On the explainability of max-plus neural networks

arXiv cs.CV / 5/5/2026


Key Points

  • The paper analyzes explainability characteristics of recently proposed linear-min-max neural networks, showing how they can be interpreted at initialization as k-medoids with the infinity norm as the distance metric.
  • Training is performed with subgradient descent to improve data fit, while the authors emphasize that the model’s decision process remains traceable via the single most activated neuron driving the output.
  • They introduce a “pixel fragility” measure to assess whether a classification change can be caused by alterations to a single input pixel.
  • Experiments on the PneumoniaMNIST dataset indicate that the proposed explanation method performs favorably compared with SHAP and Integrated Gradients.
  • Overall, the work connects a specific network structure to practical, pixel-level interpretability and compares it against established attribution techniques.
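The k-medoids reading above can be made concrete. The infinity-norm distance to a medoid decomposes into two max-plus terms, ||x - m||_inf = max(max_i(x_i - m_i), max_i(m_i - x_i)), so a nearest-medoid classifier is expressible with only max and plus operations, and the winning medoid plays the role of the single most activated neuron. The following is a minimal sketch of that initialization-time interpretation, not the authors' exact architecture; `medoids`, `labels`, and `predict` are illustrative names:

```python
import numpy as np

def linf_distances(x, medoids):
    # ||x - m_k||_inf via two max-plus style reductions:
    # max( max_i(x_i - m_ki), max_i(m_ki - x_i) )
    return np.maximum((x - medoids).max(axis=1), (medoids - x).max(axis=1))

def predict(x, medoids, labels):
    d = linf_distances(x, medoids)
    k = int(np.argmin(d))  # the single "winning" neuron: nearest medoid
    return labels[k], k, d[k]
```

Because only the winning index `k` determines the output, the decision is traceable to one medoid/neuron, which is the property the paper's explanation method builds on.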

Abstract

We investigate the explainability properties of the recently proposed linear-min-max neural networks. At initialization, they can be interpreted as k-medoids with the infinity norm as the distance. They are then trained with subgradient descent to better fit the data. Although the model has been shown to be a universal approximator, the decision process remains traceable because a single most activated neuron is responsible for the value of the output. Using this property, we design a pixel fragility measure that determines whether a change to a single pixel may be responsible for a change in the classification output. Experiments on the PneumoniaMNIST dataset show that this explanation of the network's output compares favorably with SHAP and Integrated Gradients.
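The abstract does not spell out how pixel fragility is computed, so the sketch below is only one brute-force reading consistent with the description: a pixel is flagged as fragile if pushing it to an extreme intensity (`lo` or `hi`, assumed bounds) flips the classifier's decision. The function name and interface are hypothetical:

```python
import numpy as np

def pixel_fragility(x, predict_fn, lo=0.0, hi=1.0):
    """Return a boolean mask marking pixels whose single-pixel
    alteration (to lo or hi) changes the predicted class.
    Brute-force illustration; cost is O(pixels) forward passes."""
    base = predict_fn(x)
    fragile = np.zeros(x.shape, dtype=bool)
    for i in range(x.size):
        for v in (lo, hi):
            xp = x.copy()
            xp.flat[i] = v
            if predict_fn(xp) != base:
                fragile.flat[i] = True
                break
    return fragile
```

Unlike gradient-based attributions such as Integrated Gradients, a mask like this makes a direct counterfactual claim: each flagged pixel alone suffices to change the output.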