Neural Network Conversion of Machine Learning Pipelines

arXiv cs.LG / 3/27/2026


Key Points

  • The paper extends student–teacher learning by transferring from a non-neural ML pipeline (as the teacher) to a neural network (as the student) to enable end-to-end joint optimization.
  • It focuses on replacing a random forest classifier with a student NN, aiming to consolidate multiple ML tasks under a single unified inference engine.
  • Experiments across 100 OpenML tasks show that the student NN can mimic the random-forest teacher for most tasks, provided appropriate NN hyper-parameters are selected.
  • The authors also study how random forests can be used to choose or guide NN hyper-parameters, linking the teacher model back into the NN training/tuning process.

Abstract

Transfer learning and knowledge distillation have recently gained a lot of attention in the deep learning community. One transfer approach, student–teacher learning, has been shown to successfully create ``small'' student neural networks that mimic the performance of much bigger and more complex ``teacher'' networks. In this paper, we investigate an extension to this approach and transfer from a non-neural machine learning pipeline as teacher to a neural network (NN) student, which allows for joint optimization of the various pipeline components and a single unified inference engine for multiple ML tasks. In particular, we explore replacing the random forest classifier by transfer learning to a student NN. We experimented with various NN topologies on 100 OpenML tasks in which random forest has been one of the best solutions. Our results show that for the majority of the tasks, the student NN can indeed mimic the teacher if one can select the right NN hyper-parameters. We also investigated the use of random forests for selecting the right NN hyper-parameters.
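The core idea can be sketched in a few lines of scikit-learn. This is a minimal, hypothetical illustration of hard-label distillation from a random-forest teacher to an NN student on a synthetic stand-in dataset; the paper's actual NN topologies, OpenML tasks, and distillation details are not reproduced here.

```python
# Sketch: distill a random-forest "teacher" into a small NN "student".
# The dataset and hyper-parameters below are illustrative assumptions,
# not the paper's experimental setup.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Toy stand-in for an OpenML classification task.
X, y = make_classification(n_samples=2000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Teacher: random forest trained on the true labels.
teacher = RandomForestClassifier(n_estimators=100, random_state=0)
teacher.fit(X_train, y_train)

# Student: a small NN trained to mimic the teacher's predictions
# rather than the original labels (hard-label distillation).
student = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        random_state=0)
student.fit(X_train, teacher.predict(X_train))

# How closely the student mimics the teacher on held-out data.
agreement = (student.predict(X_test) == teacher.predict(X_test)).mean()
print(f"student-teacher agreement: {agreement:.3f}")
```

A soft-label variant would instead regress on the teacher's `predict_proba` outputs, which typically transfers more of the teacher's decision structure; the paper's point is that with the right student hyper-parameters, either form of mimicry can recover most of the forest's performance.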