TabSHAP

arXiv cs.LG / 4/24/2026


Key Points

  • The paper introduces TabSHAP, a model-agnostic interpretability framework for LLM-based tabular classifiers that aims to provide faithful, local explanations for high-stakes use cases.
  • TabSHAP adapts a Shapley-style sampled-coalition approach and uses Jensen–Shannon divergence between full-input and masked-input class distributions to measure each feature’s distributional impact.
  • To respect tabular meaning in prompts, it masks at the serialized key:value field level (atomic prompt elements) rather than masking individual subword tokens.
  • Experiments on Adult Income and Heart Disease show that TabSHAP produces significantly more faithful attributions than random baselines and XGBoost-based proxy explanations.
  • The authors also run distance-metric ablations on the same test instances, recomputing attributions with KL divergence or L1 distance in place of JSD and comparing deletion faithfulness across the three metrics, with results cached separately for each metric setting.
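
The estimator described in the bullets above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `predict` callable, the `[MASK]` placeholder, and the comma-separated key:value serialization are all assumptions. Each field is credited with the drop in Jensen–Shannon divergence (between the full-input and masked-input class distributions) that unmasking it causes, averaged over sampled permutations.

```python
import random

import numpy as np


def jsd(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two class distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)


def serialize(fields, masked=frozenset()):
    """Render key:value fields as a prompt, masking whole fields at once
    (field-level masking, never individual subword tokens)."""
    return ", ".join(
        f"{k}: {'[MASK]' if k in masked else v}" for k, v in fields.items()
    )


def tabshap_attributions(predict, fields, n_samples=32, seed=0):
    """Sampled-permutation Shapley estimate with a JSD payoff.

    `predict(prompt)` returns a class-probability vector; `fields` maps
    feature names to values. Starting from a fully masked prompt, each
    sampled permutation unmasks fields one at a time, and a field is
    credited with the drop in JSD to the full-input distribution.
    """
    rng = random.Random(seed)
    keys = list(fields)
    full = predict(serialize(fields))
    phi = dict.fromkeys(keys, 0.0)
    for _ in range(n_samples):
        order = rng.sample(keys, len(keys))  # random coalition order
        masked = set(keys)
        prev = jsd(full, predict(serialize(fields, frozenset(masked))))
        for k in order:
            masked.remove(k)
            cur = jsd(full, predict(serialize(fields, frozenset(masked))))
            phi[k] += (prev - cur) / n_samples  # divergence drop = credit
            prev = cur
    return phi
```

With a toy classifier that only reacts to one field, all attribution mass concentrates on that field, matching the Shapley efficiency intuition.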

Abstract

Large Language Models (LLMs) fine-tuned on serialized tabular data are emerging as powerful alternatives to traditional tree-based models, particularly for heterogeneous or context-rich datasets. However, their deployment in high-stakes domains is hindered by a lack of faithful interpretability; existing methods often rely on global linear proxies or scalar probability shifts that fail to capture the model's full probabilistic uncertainty. In this work, we introduce TabSHAP, a model-agnostic interpretability framework designed to directly attribute the local, per-query decision logic of LLM-based tabular classifiers. By adapting a Shapley-style sampled-coalition estimator with Jensen–Shannon divergence between full-input and masked-input class distributions, TabSHAP quantifies the distributional impact of each feature rather than simple prediction flips. To align with tabular semantics, we mask at the level of serialized key:value fields (treated as atomic units in the prompt string), not individual subword tokens. Experimental validation on the Adult Income and Heart Disease benchmarks demonstrates that TabSHAP isolates critical diagnostic features, achieving significantly higher faithfulness than random baselines and XGBoost proxies. We further run a distance-metric ablation on the same test instances and TabSHAP settings: attributions are recomputed with KL or L1 replacing JSD in the similarity step (results cached per metric), and we compare deletion faithfulness across all three.
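
The deletion-faithfulness evaluation mentioned in the abstract can be read as: mask fields in attribution order and track how quickly the predicted probability collapses. The self-contained sketch below is illustrative only; the `[MASK]` convention, the comma-separated serialization, and the AUC-style summary (mean probability along the curve) are assumptions, not the paper's exact protocol.

```python
def deletion_curve(predict, fields, ranking, target=0):
    """Mask fields in attribution order; record the target-class
    probability after each deletion. A faithful ranking drives the
    probability down quickly. `predict(prompt)` returns a
    class-probability vector; `fields` maps feature names to values.
    """
    def prompt(masked):
        return ", ".join(
            f"{k}: {'[MASK]' if k in masked else v}"
            for k, v in fields.items()
        )

    masked = set()
    curve = [predict(prompt(masked))[target]]  # full-input probability
    for k in ranking:
        masked.add(k)
        curve.append(predict(prompt(masked))[target])
    return curve


def deletion_score(curve):
    """Mean probability along the deletion curve (lower = more faithful)."""
    return sum(curve) / len(curve)
```

Running this with rankings from TabSHAP under each distance metric (JSD, KL, L1) and comparing the resulting scores is one way to realize the ablation described above: a ranking that puts the truly decisive fields first yields a lower deletion score than one that does not.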