Contrastive Attribution in the Wild: An Interpretability Analysis of LLM Failures on Realistic Benchmarks

arXiv cs.AI / April 21, 2026


Key Points

  • The paper argues that existing LLM interpretability research often analyzes failures only on short prompts or toy setups, leaving a gap for realistic, commonly used benchmarks.
  • It proposes “contrastive attribution,” an LRP-based method that attributes the logit difference between an incorrect output token and a correct alternative to input tokens and internal model states.
  • The authors introduce an efficient extension to build cross-layer attribution graphs, enabling analysis for long-context inputs.
  • They run a systematic empirical study across multiple benchmarks, comparing how attribution patterns vary by dataset, model size, and training checkpoint.
  • The findings indicate that token-level contrastive attribution yields useful signals in some failure cases but is not reliably applicable across all scenarios, highlighting both its value and its limits.
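To make the core idea concrete, here is a minimal sketch of contrastive attribution on a toy linear "model" (mean-pooled embeddings projected to logits). This is not the paper's LRP implementation; all shapes and token ids are hypothetical. For a purely linear model, input × gradient attribution of the wrong-minus-correct logit difference decomposes that difference exactly across input tokens:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 5 input tokens, 8-dim embeddings, vocab of 10.
T, d, V = 5, 8, 10
E = rng.normal(size=(T, d))   # input token embeddings
W = rng.normal(size=(d, V))   # unembedding matrix of a linear "model"

# Forward pass: mean-pool the embeddings, then project to logits.
h = E.mean(axis=0)
logits = h @ W

wrong, correct = 3, 7                       # hypothetical token ids
delta = logits[wrong] - logits[correct]     # contrastive target

# Gradient of the logit difference w.r.t. each token embedding;
# for mean pooling this is (W[:, wrong] - W[:, correct]) / T at every position.
grad = np.tile((W[:, wrong] - W[:, correct]) / T, (T, 1))

# Input x gradient attribution per token (what LRP reduces to for a linear map).
scores = (E * grad).sum(axis=1)

# The per-token scores sum exactly to the contrastive logit difference.
assert np.isclose(scores.sum(), delta)
```

In a real transformer the map from embeddings to logits is nonlinear, which is why the paper uses LRP-style propagation rules rather than a single gradient step; the contrastive target (the wrong-minus-correct logit difference) stays the same.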

Abstract

Interpretability tools are increasingly used to analyze failures of Large Language Models (LLMs), yet prior work largely focuses on short prompts or toy settings, leaving their behavior on commonly used benchmarks underexplored. To address this gap, we study contrastive, LRP-based attribution as a practical tool for analyzing LLM failures in realistic settings. We formulate failure analysis as "contrastive attribution," attributing the logit difference between an incorrect output token and a correct alternative to input tokens and internal model states, and introduce an efficient extension that enables construction of cross-layer attribution graphs for long-context inputs. Using this framework, we conduct a systematic empirical study across benchmarks, comparing attribution patterns across datasets, model sizes, and training checkpoints. Our results show that this token-level contrastive attribution can yield informative signals in some failure cases, but is not universally applicable, highlighting both its utility and its limitations for realistic LLM failure analysis. Our code is available at: https://aka.ms/Debug-XAI.