AI Navigate

LLMs as Signal Detectors: Sensitivity, Bias, and the Temperature-Criterion Analogy

arXiv cs.CL · March 17, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that calibration metrics for LLMs conflate sensitivity and bias, and proposes using Signal Detection Theory (SDT) to separate these components for more precise evaluation.
  • It employs a full parametric SDT framework (unequal-variance modeling, criterion estimation, and z-ROC analysis) across 168,000 trials and three LLMs.
  • It investigates whether temperature functions as a criterion shift (as with payoff manipulations in human psychophysics) and finds that this analogy can break down because temperature also changes the generated output.
  • The results show unequal-variance evidence distributions across all models, with instruct models exhibiting more pronounced asymmetry in z-ROC slopes. Models occupying distinct positions in sensitivity-bias space could not be distinguished by calibration metrics alone, which highlights the diagnostic value of the full parametric SDT framework.
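The core SDT decomposition referenced above separates a detector's discrimination ability (d′) from its response tendency (criterion c), both computed from hit and false-alarm rates. The sketch below shows the standard equal-variance formulas; the function name and example rates are illustrative, not taken from the paper.

```python
from statistics import NormalDist

def sdt_decompose(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Equal-variance SDT decomposition.

    Sensitivity d' = z(H) - z(F): how well the observer separates
    signal (correct answers) from noise (incorrect answers).
    Criterion c = -0.5 * (z(H) + z(F)): bias toward confident (c < 0)
    or cautious (c > 0) responding, independent of sensitivity.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, c

# Two hypothetical models: identical sensitivity, opposite bias.
confident = sdt_decompose(hit_rate=0.90, fa_rate=0.35)
cautious = sdt_decompose(hit_rate=0.65, fa_rate=0.10)
```

Here both models have similar d′ but criteria of opposite sign, the kind of distinction a single scalar calibration metric like ECE collapses.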

Abstract

Large language models (LLMs) are evaluated for calibration using metrics such as Expected Calibration Error that conflate two distinct components: the model's ability to discriminate correct from incorrect answers (sensitivity) and its tendency toward confident or cautious responding (bias). Signal Detection Theory (SDT) decomposes these components. While SDT-derived metrics such as AUROC are increasingly used, the full parametric framework (unequal-variance model fitting, criterion estimation, z-ROC analysis) has not been applied to LLMs as signal detectors. In this pre-registered study, we treat three LLMs as observers performing factual discrimination across 168,000 trials and test whether temperature functions as a criterion shift analogous to payoff manipulations in human psychophysics. Critically, this analogy may break down because temperature changes the generated answer itself, not only the confidence assigned to it. Our results confirm the breakdown, with temperature simultaneously increasing sensitivity (AUC) and shifting criterion. All models exhibited unequal-variance evidence distributions (z-ROC slopes 0.52-0.84), with instruct models showing more extreme asymmetry (0.52-0.63) than the base model (0.77-0.87) or human recognition memory (~0.80). The SDT decomposition revealed that models occupying distinct positions in sensitivity-bias space could not be distinguished by calibration metrics alone, demonstrating that the full parametric framework provides diagnostic information unavailable from existing metrics.