Enhancing LIME using Neural Decision Trees

arXiv cs.LG / 2026/3/24


Key Points

  • The paper addresses a key limitation of LIME on tabular data: traditional surrogate models like linear regression or decision trees may not faithfully represent complex, non-linear decision boundaries from black-box models.
  • It proposes NDT-LIME, which replaces standard surrogates with Neural Decision Trees (NDTs) to better capture hierarchical, structured non-linearities in the local region around each prediction.
  • The authors argue that the hierarchical structure of NDTs yields local explanations that are both more faithful to the black-box model and more meaningful to users.
  • Experiments on multiple benchmark tabular datasets show consistent improvements in explanation fidelity compared with traditional LIME surrogate approaches.
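The pipeline implied by these points, perturbing an instance, weighting neighbors by proximity, fitting a differentiable tree surrogate, and scoring fidelity, can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the black-box function, the depth-1 soft tree, the kernel, and all training details here are stand-ins.

```python
# Hypothetical sketch of an NDT-LIME-style loop: perturb a tabular instance,
# weight samples with a proximity kernel, fit a depth-1 "soft" (differentiable)
# decision tree as the local surrogate, and report weighted R^2 fidelity.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in black-box model with a non-linear decision boundary.
    return (np.tanh(3.0 * X[:, 0]) * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_soft_tree(X, y, w, steps=2000, lr=0.5):
    """Fit a depth-1 soft tree: pred = g(x)*leaf1 + (1-g(x))*leaf0."""
    d = X.shape[1]
    theta = np.zeros(d + 3)
    theta[:d] = rng.normal(scale=0.1, size=d)  # gate weights
    theta[d + 1] = 1.0                         # asymmetric leaf init breaks symmetry
    for _ in range(steps):
        g = sigmoid(X @ theta[:d] + theta[d])
        pred = g * theta[d + 1] + (1 - g) * theta[d + 2]
        err = w * (pred - y)
        # Gradient of the weighted squared error through the gate.
        dg = err * (theta[d + 1] - theta[d + 2]) * g * (1 - g)
        theta[:d] -= lr * X.T @ dg / len(y)
        theta[d] -= lr * dg.mean()
        theta[d + 1] -= lr * (err * g).mean()
        theta[d + 2] -= lr * (err * (1 - g)).mean()
    return theta

def ndt_lime_fidelity(x0, n=500, width=0.5):
    """Weighted R^2 of the surrogate vs. the black box on perturbed samples."""
    X = x0 + rng.normal(scale=width, size=(n, len(x0)))
    y = black_box(X)
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / width**2)  # proximity kernel
    theta = fit_soft_tree(X, y, w)
    d = len(x0)
    g = sigmoid(X @ theta[:d] + theta[d])
    pred = g * theta[d + 1] + (1 - g) * theta[d + 2]
    ss_res = np.sum(w * (y - pred) ** 2)
    ss_tot = np.sum(w * (y - np.average(y, weights=w)) ** 2)
    return 1.0 - ss_res / ss_tot

fid = ndt_lime_fidelity(np.array([0.5, 0.5]))
print(f"local fidelity (weighted R^2): {fid:.3f}")
```

Swapping the linear surrogate for a soft tree keeps the fitting procedure gradient-based while letting the surrogate represent an axis-aligned split smoothed by the gate, which is the kind of structured non-linearity the paper targets.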

Abstract

Interpreting complex machine learning models is a critical challenge, especially for tabular data where model transparency is paramount. Local Interpretable Model-Agnostic Explanations (LIME) has been a very popular framework for interpretable machine learning, inspiring many extensions. While the traditional surrogate models used in LIME variants (e.g., linear regression and decision trees) offer a degree of stability, they can struggle to faithfully capture the complex non-linear decision boundaries inherent in many sophisticated black-box models. This work contributes toward bridging the gap between high predictive performance and interpretable decision-making. Specifically, we propose the NDT-LIME variant, which integrates Neural Decision Trees (NDTs) as surrogate models. By leveraging the structured, hierarchical nature of NDTs, our approach aims to provide more accurate and meaningful local explanations. We evaluate its effectiveness on several benchmark tabular datasets, showing consistent improvements in explanation fidelity over traditional LIME surrogates.
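Once a soft-tree surrogate is fitted locally, one plausible way to read off an explanation is to treat its gate (routing) weights as per-feature attributions, analogous to the coefficients of LIME's linear surrogate. The names, values, and attribution scheme below are illustrative assumptions, not the paper's exact method.

```python
# Hypothetical: turn a fitted depth-1 soft-tree gate sigmoid(w . x + b)
# into normalized local feature attributions.
import numpy as np

gate_w = np.array([0.2, 2.4, -0.1])        # example fitted routing weights
feature_names = ["age", "income", "tenure"]  # illustrative feature names

# Normalize absolute gate weights into attribution scores summing to 1.
attr = np.abs(gate_w) / np.abs(gate_w).sum()
ranking = sorted(zip(feature_names, attr), key=lambda t: -t[1])
for name, a in ranking:
    print(f"{name}: {a:.2f}")
```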