
ICE: Intervention-Consistent Explanation Evaluation with Statistical Grounding for LLMs

arXiv cs.CL / 3/20/2026


Key Points

  • ICE (Intervention-Consistent Explanation) is a framework that compares explanations against matched random baselines via randomization tests under multiple intervention operators, yielding win rates with confidence intervals.
  • It evaluates 7 LLMs across 4 English tasks, 6 non-English languages, and 2 attribution methods, finding that faithfulness is operator-dependent: gaps reach up to 44 percentage points, and deletion inflates estimates on short text but the pattern reverses on long text.
  • Randomized baselines reveal anti-faithfulness in about one-third of configurations, and faithfulness shows essentially no correlation with human plausibility (|r| < 0.04).
  • The study highlights dramatic model-language interactions not explained by tokenization alone, and the authors release the ICE framework and ICEBench benchmark.
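The deletion intervention mentioned above can be illustrated with a short sketch. The paper's exact scoring procedure is not given here, so every name below (`score_fn`, `deletion_gap`, the toy scorer in the usage note) is a hypothetical stand-in: the idea is to compare the score drop from deleting the explanation's top-k attributed tokens against the drop from deleting a matched random set of the same size.

```python
import random

def delete_tokens(tokens, idxs):
    """Return the token sequence with the given positions removed."""
    drop = set(idxs)
    return [t for i, t in enumerate(tokens) if i not in drop]

def deletion_gap(tokens, attributions, score_fn, k, seed=0):
    """Score drop from deleting the top-k attributed tokens, minus the drop
    from a matched random deletion of k tokens.
    Positive => the explanation beats its random baseline on this example."""
    rng = random.Random(seed)
    base = score_fn(tokens)
    # Positions with the k largest attribution scores (the explanation).
    top_k = sorted(range(len(tokens)), key=lambda i: attributions[i], reverse=True)[:k]
    # Matched random baseline: k positions drawn uniformly at random.
    rand_k = rng.sample(range(len(tokens)), k)
    drop_top = base - score_fn(delete_tokens(tokens, top_k))
    drop_rand = base - score_fn(delete_tokens(tokens, rand_k))
    return drop_top - drop_rand
```

In practice `score_fn` would be the model's probability for its original prediction; here any callable over a token list works, which keeps the sketch self-contained.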

Abstract

Evaluating whether explanations faithfully reflect a model's reasoning remains an open problem. Existing benchmarks use single interventions without statistical testing, making it impossible to distinguish genuine faithfulness from chance-level performance. We introduce ICE (Intervention-Consistent Explanation), a framework that compares explanations against matched random baselines via randomization tests under multiple intervention operators, yielding win rates with confidence intervals. Evaluating 7 LLMs across 4 English tasks, 6 non-English languages, and 2 attribution methods, we find that faithfulness is operator-dependent: operator gaps reach up to 44 percentage points, with deletion typically inflating estimates on short text but the pattern reversing on long text, suggesting that faithfulness should be interpreted comparatively across intervention operators rather than as a single score. Randomized baselines reveal anti-faithfulness in one-third of configurations, and faithfulness shows zero correlation with human plausibility (|r| < 0.04). Multilingual evaluation reveals dramatic model-language interactions not explained by tokenization alone. We release the ICE framework and ICEBench benchmark.
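The abstract's "win rates with confidence intervals" can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes per-example faithfulness scores for the explanation and its matched random baseline are already computed, counts pairwise wins, and attaches a bootstrap confidence interval to the win rate. All function and parameter names are illustrative.

```python
import random

def win_rate_with_ci(expl_scores, baseline_scores, n_boot=10_000, alpha=0.05, seed=0):
    """Fraction of examples where the explanation's faithfulness score beats
    its matched random baseline, with a percentile bootstrap CI."""
    assert len(expl_scores) == len(baseline_scores)
    rng = random.Random(seed)
    wins = [1 if e > b else 0 for e, b in zip(expl_scores, baseline_scores)]
    n = len(wins)
    rate = sum(wins) / n
    # Resample wins with replacement to estimate sampling variability.
    boots = sorted(
        sum(wins[rng.randrange(n)] for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return rate, (lo, hi)
```

A win rate whose interval sits below 0.5 is exactly the "anti-faithfulness" case the key points describe: the explanation is reliably worse than a random one.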