Remaining Useful Life Estimation for Turbofan Engines: A Comparative Study of Classical, CNN, and LSTM Approaches

arXiv cs.LG / 5/1/2026


Key Points

  • The paper compares classical models (Ridge Regression, Polynomial Ridge, XGBoost) with deep learning approaches (1D CNN and LSTM) for Remaining Useful Life (RUL) estimation in turbofan engines using the NASA C-MAPSS dataset.
  • On the FD001 and FD003 subsets, the LSTM achieves RMSE of 14.93 and 14.20, respectively, outperforming a previously reported deeper LSTM from Zheng et al. despite using a simpler single-layer architecture.
  • The 1D CNN attains RMSE of 16.97 on FD001 and 15.68 on FD003, showing strong competitiveness on FD003 but yielding more conservative RUL estimates on FD001.
  • The study evaluates Ridge Regression on both raw sequences and engineered features, while the other classical baselines use only engineered inputs; XGBoost achieves particularly strong performance on FD003 (RMSE 13.36).
  • All models are assessed under the same preprocessing pipeline to maintain a fair, apples-to-apples comparison across approaches and dataset subsets.
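The summary does not spell out the paper's exact preprocessing, but raw-sequence models on C-MAPSS are conventionally fed fixed-length sliding windows of each engine's sensor history, with the remaining-cycles count as the label. A minimal sketch of that windowing step, assuming a linear RUL target and a hypothetical window length of 30 cycles (both assumptions, not details confirmed by the paper):

```python
import numpy as np

def make_windows(sensors, window=30):
    """Slice one engine's run-to-failure sensor history into fixed-length
    sliding windows, labelling each window with the cycles remaining.

    sensors: array of shape (n_cycles, n_features); the engine fails after
    the last recorded cycle, so a window ending at cycle `end` has
    RUL = n_cycles - end.
    """
    n_cycles = len(sensors)
    X, y = [], []
    for end in range(window, n_cycles + 1):
        X.append(sensors[end - window:end])   # last `window` cycles of sensors
        y.append(n_cycles - end)              # cycles left after this window
    return np.stack(X), np.array(y, dtype=float)

# Toy engine: 100 cycles of 5 sensor channels
rng = np.random.default_rng(0)
X, y = make_windows(rng.normal(size=(100, 5)), window=30)
print(X.shape, y.shape)  # (71, 30, 5) (71,)
print(y[0], y[-1])       # 70.0 0.0
```

Windows shaped `(window, n_features)` are what a 1D CNN or LSTM consumes directly, while the classical baselines would instead see features engineered from each window.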

Abstract

Remaining Useful Life (RUL) estimation is a critical component of Prognostics and Health Management (PHM), enabling proactive maintenance scheduling and reducing unplanned failures in industrial equipment. This paper presents a comparative study of machine learning approaches for RUL estimation on the NASA C-MAPSS turbofan engine dataset: classical baselines (Ridge Regression, Polynomial Ridge, and XGBoost), a 1D Convolutional Neural Network (CNN), and a Long Short-Term Memory (LSTM) network. All models are evaluated on the FD001 and FD003 subsets under an identical preprocessing pipeline to ensure a fair comparison. Among raw-sequence models, the LSTM achieves RMSE of 14.93 and 14.20 on FD001 and FD003, respectively, outperforming the deep LSTM reported by Zheng et al. (RMSE 16.14 and 16.18) despite using a simpler single-layer architecture. The 1D CNN achieves RMSE of 16.97 on FD001 and 15.68 on FD003, demonstrating competitive performance on FD003 while producing more conservative RUL predictions on FD001. Ridge Regression is evaluated on both raw and engineered features, while the other classical models use only engineered inputs. XGBoost achieves an RMSE of 13.36 on FD003, highlighting the competitiveness of nonlinear modeling.
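Every headline number above is a root-mean-square error between predicted and true RUL on the test engines. For concreteness, a minimal sketch of that metric (a standard definition; the toy values below are illustrative, not from the paper):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error: sqrt of the mean squared prediction error,
    the single metric used to compare all models in the study."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy example: true RUL vs. a model's predictions, in cycles
print(round(rmse([50, 30, 10], [47, 34, 9]), 2))  # 2.94
```

Because errors are squared before averaging, RMSE penalises large misses most, which is why a few badly overestimated engines can dominate the score.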