Is my model perplexed for the right reason? Contrasting LLMs' Benchmark Behavior with Token-Level Perplexity

arXiv cs.CL / 4/1/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that standard LLM benchmark scores don’t reveal whether models’ correct answers come from the intended underlying linguistic mechanisms, risking confirmation bias.
  • It proposes an interpretability framework using token-level perplexity distributions over minimal sentence pairs that differ by one or a few “pivotal” tokens (see the definitions after this list).
  • The approach is designed to support hypothesis-driven analysis while avoiding unstable feature-attribution methods.
  • Experiments on controlled linguistic benchmarks with multiple open-weight LLMs find that linguistically important tokens affect behavior, but do not fully account for observed perplexity shifts.
  • The results suggest LLMs rely on additional heuristics beyond the expected linguistic cues, motivating further investigation into hidden factors driving benchmark performance.
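
For reference, the token-level quantities the framework builds on (standard definitions, assumed here rather than quoted from the paper): the surprisal of a token is its negative log-probability under the model given the preceding context, and sentence perplexity is the exponentiated average surprisal.

```latex
% Standard definitions (assumed notation, not taken from the paper):
% surprisal of token w_i given its prefix, and sentence-level perplexity.
s_i = -\log p_\theta(w_i \mid w_{<i}), \qquad
\mathrm{PPL}(w_{1:N}) = \exp\!\left( \frac{1}{N} \sum_{i=1}^{N} s_i \right)
```

Comparing the per-token surprisal profiles of the two sentences in a minimal pair localizes where the model's probability estimates diverge; if the model relied only on the intended linguistic cue, the divergence should concentrate on the pivotal tokens.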

Abstract

Standard evaluations of large language models (LLMs) focus on task performance, offering limited insight into whether correct behavior reflects appropriate underlying mechanisms and risking confirmation bias. We introduce a simple, principled interpretability framework based on token-level perplexity to test whether models rely on linguistically relevant cues. By comparing perplexity distributions over minimal sentence pairs differing in one or a few “pivotal” tokens, our method enables precise, hypothesis-driven analysis without relying on unstable feature-attribution techniques. Experiments on controlled linguistic benchmarks with several open-weight LLMs show that, while linguistically important tokens influence model behavior, they never fully explain the observed perplexity shifts, revealing that models rely on heuristics beyond the expected linguistic ones.
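
As a concrete illustration of the per-token comparison, here is a minimal sketch assuming a HuggingFace causal LM. The model name ("gpt2") and the subject-verb agreement minimal pair are illustrative stand-ins, not the paper's actual models or stimuli.

```python
# Minimal sketch: per-token surprisal for a minimal sentence pair.
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # illustrative stand-in; the paper evaluates several open-weight LLMs
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

@torch.no_grad()
def token_surprisals(sentence: str) -> list[tuple[str, float]]:
    """Return (token, -log p(w_i | w_<i)) for every token after the first."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    logits = model(input_ids=ids).logits
    # Logits at position i predict the token at position i + 1.
    log_probs = F.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nll = -log_probs[torch.arange(targets.size(0)), targets]
    return list(zip(tokenizer.convert_ids_to_tokens(targets.tolist()), nll.tolist()))

# A minimal pair differing in one pivotal token (subject-verb agreement).
pair = ["The keys to the cabinet are on the table.",
        "The keys to the cabinet is on the table."]
for sentence in pair:
    print(sentence)
    for token, s in token_surprisals(sentence):
        print(f"  {token:>12}  {s:6.2f} nats")
```

If the model tracked the intended cue, the surprisal gap between the two sentences would concentrate on the pivotal token ("are"/"is"); the paper's finding is that such pivotal tokens matter but never fully account for the shift.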