Masked by Consensus: Disentangling Privileged Knowledge in LLM Correctness

arXiv cs.CL / 4/15/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper tests whether LLMs have “privileged” internal information about answer correctness that is not recoverable from externally observable signals.
  • Experiments using correctness classifiers trained on a model’s own hidden states versus peer-model representations find no self-probing advantage on standard benchmarks.
  • The authors hypothesize this null result is explained by high agreement among models on which answers are correct.
  • On subsets where models disagree, they identify domain-specific privileged knowledge: self-representations improve correctness prediction on factual-knowledge tasks but not on math reasoning.
  • Layer-wise analysis shows the factual advantage growing from early to mid layers, suggesting model-specific memory retrieval, while math reasoning shows no consistent self-probing advantage at any depth.
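The probing setup described above can be sketched as follows. This is a minimal illustration, not the authors' code: the "hidden states" are synthetic vectors in which the self model's representations carry a small extra correctness signal (the hypothesized privileged knowledge), and a linear logistic-regression probe is trained on each representation source and compared on held-out accuracy.

```python
# Hypothetical sketch of the paper's probing setup: train a linear
# "correctness probe" on a model's own hidden states vs. a peer model's,
# and compare held-out accuracy. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 64

# 1 = the model answered this question correctly.
labels = rng.integers(0, 2, size=n).astype(float)

# Shared signal visible to both self and peer representations.
shared_dir = rng.normal(size=d)
shared = rng.normal(size=(n, d)) + 0.5 * labels[:, None] * shared_dir

# Self states add an extra label-correlated direction (assumed
# "privileged knowledge"); peer states only see the shared signal + noise.
priv_dir = rng.normal(size=d)
self_states = shared + 0.5 * labels[:, None] * priv_dir
peer_states = shared + rng.normal(scale=0.1, size=(n, d))

def train_probe(X, y, epochs=300, lr=0.1):
    """Logistic-regression probe trained with plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(correct)
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0) == y))

split = n // 2
for name, X in [("self", self_states), ("peer", peer_states)]:
    w, b = train_probe(X[:split], labels[:split])
    print(name, "probe accuracy:", round(accuracy(w, b, X[split:], labels[split:]), 3))
```

In the paper's "standard evaluation" regime the two probes score about the same; the self-advantage only emerges on disagreement subsets for factual questions, which in this toy corresponds to the extra `priv_dir` component being present or absent.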

Abstract

Humans use introspection to evaluate their understanding through private internal states inaccessible to external observers. We investigate whether large language models possess similar privileged knowledge about answer correctness, information unavailable through external observation. We train correctness classifiers on question representations from both a model's own hidden states and external models, testing whether self-representations provide a performance advantage. On standard evaluation, we find no advantage: self-probes perform comparably to peer-model probes. We hypothesize this is due to high inter-model agreement on answer correctness. To isolate genuine privileged knowledge, we evaluate on disagreement subsets, where models produce conflicting predictions. Here, we discover domain-specific privileged knowledge: self-representations consistently outperform peer representations in factual knowledge tasks, but show no advantage in math reasoning. We further localize this domain asymmetry across model layers, finding that the factual advantage emerges progressively from early-to-mid layers onward, consistent with model-specific memory retrieval, while math reasoning shows no consistent advantage at any depth.