An Isotropic Approach to Efficient Uncertainty Quantification with Gradient Norms

arXiv cs.CL / 4/1/2026


Key Points

  • The paper proposes a lightweight uncertainty quantification method for neural networks that avoids expensive computation and does not require training data access, targeting settings like large language models.
  • It approximates epistemic uncertainty using a first-order Taylor expansion where uncertainty is proportional to the squared gradient norm, and models aleatoric uncertainty using the Bernoulli variance of the point prediction.
  • The approach assumes parameter covariance is isotropic, arguing this avoids structured distortions that arise when estimating covariance from non-training data.
  • Empirical validation on synthetic problems shows uncertainty estimates closely match Markov Chain Monte Carlo references, with performance improving as model size increases.
  • The authors apply the estimates to question answering and find that the predictive signal is benchmark-dependent: combined uncertainty yields strong AUROC on TruthfulQA but falls to near chance on TriviaQA, so it does not uniformly outperform self-assessment methods.
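The recipe in the bullets above can be sketched for a toy logistic model. This is an illustrative reconstruction, not the paper's code: the function name, the closed-form gradient, and the isotropic variance parameter `sigma2` are all assumptions made for the example.

```python
import numpy as np

def uncertainty_estimates(w, x, sigma2=1.0):
    """Sketch of the two estimates for a logistic model p = sigmoid(w.x).

    Epistemic: first-order Taylor expansion plus an isotropic parameter
    covariance sigma2 * I gives Var ≈ sigma2 * ||grad_w p||^2.
    Aleatoric: Bernoulli variance of the point prediction, p * (1 - p).
    """
    z = float(np.dot(w, x))
    p = 1.0 / (1.0 + np.exp(-z))
    grad = p * (1.0 - p) * x                  # analytic grad_w sigmoid(w.x)
    epistemic = sigma2 * float(np.dot(grad, grad))  # squared gradient norm
    aleatoric = p * (1.0 - p)
    return p, epistemic, aleatoric
```

For a large network the gradient would come from a single backward pass through the unmodified pretrained model rather than a closed form, but the two scalar estimates are assembled the same way.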

Abstract

Existing methods for quantifying predictive uncertainty in neural networks are either computationally intractable for large language models or require access to training data that is typically unavailable. We derive a lightweight alternative through two approximations: a first-order Taylor expansion that expresses uncertainty in terms of the gradient of the prediction and the parameter covariance, and an isotropy assumption on the parameter covariance. Together, these yield epistemic uncertainty as the squared gradient norm and aleatoric uncertainty as the Bernoulli variance of the point prediction, from a single forward-backward pass through an unmodified pretrained model. We justify the isotropy assumption by showing that covariance estimates built from non-training data introduce structured distortions that isotropic covariance avoids, and that theoretical results on the spectral properties of large networks support the approximation at scale. Validation against reference Markov Chain Monte Carlo estimates on synthetic problems shows strong correspondence that improves with model size. We then use the estimates to investigate when each uncertainty type carries useful signal for predicting answer correctness in question answering with large language models, revealing a benchmark-dependent divergence: the combined estimate achieves the highest mean AUROC on TruthfulQA, where questions involve genuine conflict between plausible answers, but falls to near chance on TriviaQA's factual recall, suggesting that parameter-level uncertainty captures a fundamentally different signal than self-assessment methods.
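The AUROC reported above measures how well an uncertainty score ranks incorrect answers above correct ones. A minimal pairwise implementation of that metric (written for this summary; not the authors' evaluation code) makes the interpretation concrete:

```python
import numpy as np

def auroc(scores, labels):
    """Probability that a randomly chosen incorrect answer receives a higher
    uncertainty score than a randomly chosen correct one.

    scores: uncertainty per answer (e.g. epistemic + aleatoric combined)
    labels: 1 = answer was incorrect, 0 = answer was correct
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # count pairs where the incorrect answer scores higher; ties count half
    gt = (pos[:, None] > neg[None, :]).sum()
    eq = (pos[:, None] == neg[None, :]).sum()
    return (gt + 0.5 * eq) / (len(pos) * len(neg))
```

An AUROC of 1.0 means uncertainty perfectly separates wrong from right answers; 0.5 is chance, which is what the paper reports for the combined estimate on TriviaQA's factual recall.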