Hessian-Enhanced Token Attribution (HETA): Interpreting Autoregressive LLMs

arXiv cs.AI · April 16, 2026


Key Points

  • The paper introduces Hessian-Enhanced Token Attribution (HETA) to explain how input tokens contribute to outputs in decoder-only (autoregressive) LLMs, a setting where prior attribution methods often break down because they do not account for the causal dynamics of generation.
  • HETA combines a semantic transition vector, Hessian-based second-order sensitivity scores, and KL-divergence-based information loss when masking tokens to produce context-aware and causally faithful attributions.
  • The framework is evaluated across multiple models and datasets, showing improved attribution faithfulness and better alignment with human annotations versus existing attribution approaches.
  • The authors also contribute a curated benchmark dataset to systematically assess attribution quality specifically for generative settings.
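One of the three components above, the KL-divergence information-loss score, can be illustrated with a small sketch: mask each context token in turn and measure how far the model's next-token distribution moves. The toy "model" below (embeddings summed into logits) and all names are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def kl(p, q, eps=1e-12):
    """KL divergence D(p || q) between two discrete distributions."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Toy stand-in for a decoder-only LM: next-token logits are the sum of
# the embeddings of the unmasked context tokens (purely illustrative).
EMB = np.array([[2.0, 0.1, 0.3],
                [0.2, 1.5, 0.4],
                [0.1, 0.2, 1.8]])  # one row per vocabulary id

def next_token_probs(context_ids, mask=None):
    mask = mask if mask is not None else [True] * len(context_ids)
    logits = np.zeros(EMB.shape[1])
    for tid, keep in zip(context_ids, mask):
        if keep:
            logits += EMB[tid]
    return softmax(logits)

def kl_attribution(context_ids):
    """Score each context token by the KL divergence between the model's
    next-token distribution with and without that token present."""
    base = next_token_probs(context_ids)
    scores = []
    for i in range(len(context_ids)):
        mask = [j != i for j in range(len(context_ids))]
        scores.append(kl(base, next_token_probs(context_ids, mask)))
    return scores

scores = kl_attribution([0, 1, 2])  # one non-negative score per token
```

A token whose removal barely shifts the output distribution receives a score near zero, while a token the prediction depends on receives a large score; HETA combines this signal with the other two components rather than using it alone.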

Abstract

Attribution methods seek to explain language model predictions by quantifying the contribution of input tokens to generated outputs. However, most existing techniques are designed for encoder-based architectures and rely on linear approximations that fail to capture the causal and semantic complexities of autoregressive generation in decoder-only models. To address these limitations, we propose Hessian-Enhanced Token Attribution (HETA), a novel attribution framework tailored for decoder-only language models. HETA combines three complementary components: a semantic transition vector that captures token-to-token influence across layers, Hessian-based sensitivity scores that model second-order effects, and KL divergence to measure information loss when tokens are masked. This unified design produces context-aware, causally faithful, and semantically grounded attributions. Additionally, we introduce a curated benchmark dataset for systematically evaluating attribution quality in generative settings. Empirical evaluations across multiple models and datasets demonstrate that HETA consistently outperforms existing methods in attribution faithfulness and alignment with human annotations, establishing a new standard for interpretability in autoregressive language models.
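The abstract's second component, Hessian-based sensitivity, captures second-order effects that gradient (first-order) attributions miss. A minimal sketch of the idea: approximate the diagonal of the Hessian of a scalar loss with respect to input-embedding coordinates via central second differences. The quadratic `loss` here is a hypothetical stand-in for a model's next-token loss, not HETA's actual objective.

```python
import numpy as np

def loss(x):
    # Toy scalar "loss" over a flat input-embedding vector x; stands in
    # for, e.g., the negative log-likelihood of a generated token.
    return float(np.sum(x ** 2) + x[0] * x[1])

def hessian_diag(f, x, h=1e-4):
    """Central second differences: approximate d^2 f / dx_i^2 per coordinate."""
    x = np.asarray(x, dtype=float)
    diag = np.zeros_like(x)
    f0 = f(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        diag[i] = (f(x + e) - 2.0 * f0 + f(x - e)) / h ** 2
    return diag

d = hessian_diag(loss, np.array([0.5, -0.3, 1.0]))
# For this quadratic loss the exact diagonal is [2, 2, 2]
```

Large second derivatives flag inputs where the loss surface is sharply curved, i.e. where a linear (gradient-only) approximation is least faithful; in practice one would use autodiff Hessian-vector products rather than finite differences, which scale poorly.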