Extracting Interpretable Models from Tree Ensembles: Computational and Statistical Perspectives

arXiv stat.ML / 4/1/2026


Key Points

  • The paper introduces a new estimator that extracts compact sets of interpretable decision rules from tree ensemble models while preserving predictive accuracy.
  • A key novelty is the ability to jointly tune the number of extracted rules and the interaction depth of each rule, which the authors show improves accuracy.
  • The work includes an exact optimization algorithm for the estimator’s core problem and an approximate method for computing regularization paths (solutions across different model sizes).
  • The authors provide non-asymptotic prediction error bounds, showing large-sample performance comparable to an oracle baseline that optimally combines ensemble rules under the same complexity constraint.
  • Experiments indicate the proposed rule-extraction approach outperforms existing algorithms for turning tree ensembles into interpretable models.
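
The paper's exact solver is not reproduced here, but the overall pipeline the key points describe can be illustrated with a simpler, RuleFit-style stand-in: extract each root-to-leaf path of a tree ensemble as a 0/1 rule feature, then select a compact subset with an L1 penalty. All names below (`leaf_rules`, `apply_rule`, the synthetic data) are illustrative assumptions, not the authors' code; `max_depth` plays the role of the interaction-depth control.

```python
# Hypothetical RuleFit-style sketch (NOT the paper's estimator): extract
# decision rules from a boosted-tree ensemble, encode each rule as a 0/1
# feature, then pick a compact subset with an L1-penalized linear fit.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 5))
# Ground truth is a two-way interaction, so depth-2 rules can capture it.
y = 2.0 * (X[:, 0] > 0) * (X[:, 1] > 0) + 0.1 * rng.normal(size=400)

# max_depth bounds each rule's interaction depth (number of conditions).
ens = GradientBoostingRegressor(n_estimators=30, max_depth=2, random_state=0)
ens.fit(X, y)

def leaf_rules(tree):
    """Enumerate root-to-leaf paths as lists of (feature, threshold, go_left)."""
    rules, stack = [], [(0, [])]
    while stack:
        node, conds = stack.pop()
        if tree.children_left[node] == -1:  # leaf node
            rules.append(conds)
            continue
        f, t = tree.feature[node], tree.threshold[node]
        stack.append((tree.children_left[node], conds + [(f, t, True)]))
        stack.append((tree.children_right[node], conds + [(f, t, False)]))
    return rules

rules = [r for est in ens.estimators_.ravel() for r in leaf_rules(est.tree_)]

def apply_rule(rule, X):
    """Binary indicator: 1 where a sample satisfies every condition."""
    m = np.ones(len(X), dtype=bool)
    for f, t, go_left in rule:
        m &= (X[:, f] <= t) if go_left else (X[:, f] > t)
    return m.astype(float)

R = np.column_stack([apply_rule(r, X) for r in rules])  # rule feature matrix
lasso = Lasso(alpha=0.01).fit(R, y)
kept = np.flatnonzero(lasso.coef_)  # indices of the extracted (selected) rules
print(f"{R.shape[1]} candidate rules -> {kept.size} selected")
```

Each selected rule can be printed as a conjunction of threshold conditions and inspected by hand, which is the interpretability payoff the paper targets; the paper replaces the Lasso step with a tailored exact algorithm for the cardinality-constrained problem.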

Abstract

Tree ensembles are non-parametric methods widely recognized for their accuracy and ability to capture complex interactions. While these models excel at prediction, they are difficult to interpret and may fail to uncover useful relationships in the data. We propose an estimator to extract compact sets of decision rules from tree ensembles. The extracted models are accurate and can be manually examined to reveal relationships between the predictors and the response. A key novelty of our estimator is the flexibility to jointly control the number of rules extracted and the interaction depth of each rule, which improves accuracy. We develop a tailored exact algorithm to efficiently solve optimization problems underlying our estimator and an approximate algorithm for computing regularization paths, sequences of solutions that correspond to varying model sizes. We also establish novel non-asymptotic prediction error bounds for our proposed approach, comparing it to an oracle that chooses the best data-dependent linear combination of the rules in the ensemble subject to the same complexity constraint as our estimator. The bounds illustrate that the large-sample predictive performance of our estimator is on par with that of the oracle. Through experiments, we demonstrate that our estimator outperforms existing algorithms for rule extraction.
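The abstract's notion of a regularization path, a sequence of solutions of varying model size, can be sketched with an off-the-shelf Lasso path as a stand-in for the authors' approximate path algorithm. The rule matrix below is synthetic and the solver is ordinary coordinate descent, both assumptions for illustration only.

```python
# Hedged sketch of a regularization path: sweep the sparsity penalty and
# record how many rules survive at each level. Plain lasso_path is used
# as a stand-in for the paper's approximate path algorithm.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(1)
n, p = 300, 40
R = (rng.normal(size=(n, p)) > 0).astype(float)  # stand-in 0/1 rule features
beta = np.zeros(p)
beta[:3] = [1.5, -1.0, 0.5]                      # only 3 rules truly matter
y = R @ beta + 0.1 * rng.normal(size=n)

alphas, coefs, _ = lasso_path(R, y, n_alphas=20)  # alphas are decreasing
sizes = (np.abs(coefs) > 1e-8).sum(axis=0)        # model size at each alpha
for a, s in zip(alphas, sizes):
    print(f"alpha={a:.4f}  rules={s}")
```

Reading the path from largest to smallest penalty shows the model growing from empty toward denser rule sets; a practitioner picks the point on the path that balances size against accuracy, which is exactly the trade-off the paper's complexity-constrained oracle bound is stated against.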