SHAPCA: Consistent and Interpretable Explanations for Machine Learning Models on Spectroscopy Data

arXiv cs.LG / 3/20/2026

Key Points

  • SHAPCA is a new explainable ML pipeline that combines Principal Component Analysis (PCA) for dimensionality reduction with SHAP (SHapley Additive exPlanations) to provide explanations in the original input space for spectroscopy data.
  • The approach tackles high dimensionality and strong collinearity in spectroscopy data to improve the stability and consistency of model explanations across multiple training runs.
  • It enables both global and local explanations, highlighting spectral bands that drive overall model behavior as well as instance-specific features that influence individual predictions.
  • The framework aims to enhance interpretability by linking explanations back to underlying biological components and demonstrates numerical evidence of greater consistency across runs.
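The pipeline described above can be sketched end to end on synthetic collinear "spectra". This is a minimal illustration, not the paper's code: it assumes a linear model on the PCA scores (for which SHAP values have the closed form φₖ = wₖ(zₖ − E[zₖ])), and it back-projects attributions to wavelengths by distributing each component's contribution through the PCA loadings; the paper's exact back-projection may differ.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# Synthetic stand-in for spectroscopy data: a few latent "bands"
# generate highly collinear, high-dimensional signals.
rng = np.random.default_rng(0)
n_samples, n_wavelengths, n_components = 200, 300, 5
latent = rng.normal(size=(n_samples, n_components))
bands = rng.normal(size=(n_components, n_wavelengths))
X = latent @ bands + 0.05 * rng.normal(size=(n_samples, n_wavelengths))
y = latent[:, 0] - 2.0 * latent[:, 1] + 0.1 * rng.normal(size=n_samples)

# Step 1: PCA compresses the collinear spectra into a few scores.
pca = PCA(n_components=n_components).fit(X)
Z = pca.transform(X)  # scores are zero-mean by construction

# Step 2: fit a model on the scores. For a linear model the SHAP
# value of component k is simply w_k * (z_k - E[z_k]), with E[z] = 0.
model = LinearRegression().fit(Z, y)
phi_pc = model.coef_ * Z  # per-sample SHAP values in PC space

# Step 3: push the attributions back to the original wavelengths.
# Since z_k = sum_j V[k, j] * (x_j - mu_j), each component's
# contribution can be distributed over wavelengths via the loadings V.
X_centered = X - pca.mean_
phi_input = X_centered * (model.coef_ @ pca.components_)  # (n_samples, n_wavelengths)

# Sanity check: attributions in either space sum, per sample, to the
# prediction's deviation from the mean prediction.
pred = model.predict(Z)
assert np.allclose(phi_input.sum(axis=1), pred - pred.mean())
```

The additivity check at the end is what makes the back-projection well behaved: per-wavelength attributions still sum to the model output offset, so the explanation remains faithful after leaving PC space.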

Abstract

In recent years, machine learning models have been increasingly applied to spectroscopic datasets for chemical and biomedical analysis. For their successful adoption, particularly in clinical and safety-critical settings, professionals and researchers must be able to understand and trust the reasoning behind model predictions. However, the inherently high dimensionality and strong collinearity of spectroscopy data pose a fundamental challenge to model explainability. These properties not only complicate model training but also undermine the stability and consistency of explanations, leading to fluctuations in feature importance across repeated training runs. Feature extraction techniques have been used to reduce the input dimensionality; however, the resulting features obscure the connection between the prediction and the original signal. This study proposes SHAPCA, an explainable machine learning pipeline that combines Principal Component Analysis (for dimensionality reduction) and SHapley Additive exPlanations (for post hoc explanation) to provide explanations in the original input space, which a practitioner can interpret and link back to the biological components. The proposed framework enables analysis from both global and local perspectives, revealing the spectral bands that drive overall model behaviour as well as the instance-specific features that influence individual predictions. Numerical analysis demonstrated the interpretability of the results and greater consistency across different runs.
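The global versus local distinction in the abstract can be illustrated on any per-wavelength attribution matrix (samples × wavelengths). The aggregation below follows the standard SHAP summary convention (mean absolute attribution for global importance, signed per-sample attributions for local explanation); the toy matrix and index names are illustrative, not taken from the paper.

```python
import numpy as np

# Toy attribution matrix: rows = samples, columns = wavelengths.
# In SHAPCA these would be SHAP values back-projected to the input space.
rng = np.random.default_rng(1)
scales = np.array([0.1, 2.0, 0.1, 0.1, 1.5, 0.1, 0.1, 0.1])
phi = rng.normal(size=(100, 8)) * scales

# Global view: mean |attribution| per wavelength highlights the
# spectral bands that drive overall model behaviour.
global_importance = np.abs(phi).mean(axis=0)
top_bands = np.argsort(global_importance)[::-1][:2]

# Local view: one sample's signed attributions explain which bands
# pushed that individual prediction up or down.
local = phi[0]
top_local = np.argsort(np.abs(local))[::-1][:2]
```

Here the two high-variance columns (indices 1 and 4) dominate the global ranking, mirroring how dominant spectral bands would surface in a SHAP summary plot, while the local view can rank bands differently for any single spectrum.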