Applied Explainability for Large Language Models: A Comparative Study

arXiv cs.AI / 4/20/2026


Key Points

  • The paper studies three existing explainability techniques—Integrated Gradients, Attention Rollout, and SHAP—to address the interpretability gap of large language models.
  • Experiments are conducted under a consistent, reproducible setup using a fine-tuned DistilBERT model for SST-2 sentiment classification, enabling fair comparison of techniques.
  • The findings indicate that gradient-based attribution yields more stable and intuitive explanations, whereas attention-based approaches are faster but may not align well with prediction-relevant features.
  • Model-agnostic methods like SHAP provide flexibility across model types but come with higher computational cost and greater variability.
  • The study concludes that explainability tools are best used as diagnostic aids rather than definitive explanations, highlighting trade-offs that matter for trust, debugging, and deployment.
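To make the gradient-based attribution mentioned above concrete, here is a minimal, self-contained sketch of Integrated Gradients on a toy differentiable function. The paper applies the method to a fine-tuned DistilBERT model (typically via a library such as Captum, operating on token embeddings); the toy score function and numerical gradients below are illustrative stand-ins, not the paper's code.

```python
# Sketch of Integrated Gradients: average the gradient along the straight
# path from a baseline x' to the input x, then scale by (x - x').
# IG_i(x) = (x_i - x'_i) * integral_0^1 dF(x' + a*(x - x'))/dx_i da

def grad(f, x, i, eps=1e-6):
    """Central-difference estimate of dF/dx_i at point x."""
    xp, xm = list(x), list(x)
    xp[i] += eps
    xm[i] -= eps
    return (f(xp) - f(xm)) / (2 * eps)

def integrated_gradients(f, x, baseline, steps=100):
    """Riemann-sum approximation of the path integral for each feature."""
    n = len(x)
    grad_sums = [0.0] * n
    for k in range(1, steps + 1):
        alpha = k / steps
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        for i in range(n):
            grad_sums[i] += grad(f, point, i)
    return [(x[i] - baseline[i]) * grad_sums[i] / steps for i in range(n)]

# Toy "model": a smooth score over two input features (illustrative only).
f = lambda v: v[0] ** 2 + 3 * v[1]
x, baseline = [2.0, 1.0], [0.0, 0.0]
attr = integrated_gradients(f, x, baseline)

# Completeness axiom: attributions sum (approximately) to f(x) - f(baseline).
total = sum(attr)  # here f(x) - f(baseline) = 7.0
```

The completeness check at the end is one reason gradient-based attributions tend to be stable: the attributions are tied directly to the change in the model's output, rather than to internal quantities like attention weights.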

Abstract

Large language models (LLMs) achieve strong performance across many natural language processing tasks, yet their decision processes remain difficult to interpret. This lack of transparency creates challenges for trust, debugging, and deployment in real-world systems. This paper presents an applied comparative study of three explainability techniques (Integrated Gradients, Attention Rollout, and SHAP) on a fine-tuned DistilBERT model for SST-2 sentiment classification. Rather than proposing new methods, the focus is on evaluating the practical behavior of existing approaches under a consistent and reproducible setup. The results show that gradient-based attribution provides more stable and intuitive explanations, while attention-based methods are computationally efficient but less aligned with prediction-relevant features. Model-agnostic approaches offer flexibility but introduce higher computational cost and variability. This work highlights key trade-offs between explainability methods and emphasizes their role as diagnostic tools rather than definitive explanations. The findings provide practical insights for researchers and engineers working with transformer-based NLP systems. This is a preprint and has not undergone peer review.
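The Attention Rollout baseline compared in the abstract can also be sketched compactly. The idea (from Abnar and Zuidema's rollout method) is to mix each layer's attention map with the identity matrix to account for residual connections, re-normalize, and multiply the adjusted maps through the layers to estimate how much each input token contributes to each position. The 3x3 matrices below are toy examples, not real DistilBERT attention weights.

```python
# Sketch of Attention Rollout: A_hat = 0.5 * A + 0.5 * I per layer
# (residual mixing), then rollout = A_hat_L @ ... @ A_hat_1.

def normalize_rows(m):
    """Rescale each row to sum to 1 (keep matrices row-stochastic)."""
    return [[v / sum(row) for v in row] for row in m]

def add_residual(attn):
    """Mix attention with the identity to model the residual connection."""
    n = len(attn)
    mixed = [[0.5 * attn[i][j] + (0.5 if i == j else 0.0) for j in range(n)]
             for i in range(n)]
    return normalize_rows(mixed)

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def attention_rollout(layers):
    """Multiply residual-adjusted attention maps across layers."""
    rollout = add_residual(layers[0])
    for attn in layers[1:]:
        rollout = matmul(add_residual(attn), rollout)
    return rollout

# Two toy layers of (head-averaged) row-stochastic attention over 3 tokens.
layer1 = [[0.6, 0.2, 0.2], [0.1, 0.8, 0.1], [0.3, 0.3, 0.4]]
layer2 = [[0.5, 0.25, 0.25], [0.2, 0.6, 0.2], [0.1, 0.1, 0.8]]
rollout = attention_rollout([layer1, layer2])

# Each row of the rollout remains a distribution over the input tokens.
row_sums = [sum(row) for row in rollout]
```

Note what this computation does and does not use: it needs only the attention matrices, which makes it fast, but it never consults the model's output. That is consistent with the abstract's finding that attention-based explanations are efficient yet less aligned with prediction-relevant features than gradient-based attribution.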