Weakly Supervised Distillation of Hallucination Signals into Transformer Representations

arXiv cs.LG / 4/9/2026


Key Points

  • The paper proposes distilling hallucination-detection supervision into transformer internal representations so that hallucination can be detected from hidden activations alone at inference time, avoiding external verification (retrieval, gold answers, or judge models) during deployment.
  • It introduces a weakly supervised labeling framework using substring matching, sentence-embedding similarity, and an LLM-as-judge to mark generated responses as grounded vs. hallucinated without human annotation, then builds a 15,000-example dataset from SQuAD v2 using LLaMA-2-7B generations and per-layer hidden states.
  • Five probing classifiers are trained directly on the collected hidden states, treating the grounding signals only as training-time supervision, to test whether hallucination signals can be internalized by the representation.
  • Experimental results indicate that transformer-based probes provide the best discrimination, with specific architectures (notably M2 and M3) showing top performance on both cross-validation metrics and a held-out test set.
  • The approach is evaluated for latency, showing millisecond-scale probe overhead (batched and single-sample) and minimal impact on end-to-end generation+probe throughput, suggesting practical deployability.
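The weak-supervision idea in the second bullet can be sketched as a simple vote over the three grounding signals. This is an illustrative assumption, not the paper's exact aggregation rule: the threshold, the majority-vote combination, and all function names here are hypothetical.

```python
# Hypothetical sketch of combining three weak grounding signals
# (substring match, embedding similarity, LLM-as-judge verdict)
# into a grounded/hallucinated label. Threshold and voting rule
# are illustrative, not the paper's exact procedure.

def substring_match(answer: str, gold: str) -> bool:
    """Signal 1: does the gold answer appear verbatim in the generation?"""
    return gold.strip().lower() in answer.strip().lower()

def label_response(answer: str, gold: str, embed_sim: float,
                   judge_grounded: bool, sim_threshold: float = 0.8) -> str:
    """Majority vote over the three weak signals."""
    votes = [
        substring_match(answer, gold),     # signal 1: substring matching
        embed_sim >= sim_threshold,        # signal 2: embedding similarity
        judge_grounded,                    # signal 3: LLM-as-judge verdict
    ]
    return "grounded" if sum(votes) >= 2 else "hallucinated"

print(label_response("Paris is the capital.", "Paris", 0.91, True))
# prints "grounded"
```

A majority vote is only one plausible way to reconcile disagreeing signals; the paper's structured labels may weight or record the signals differently.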

Abstract

Existing hallucination detection methods for large language models (LLMs) rely on external verification at inference time, requiring gold answers, retrieval systems, or auxiliary judge models. We ask whether this external supervision can instead be distilled into the model's own representations during training, enabling hallucination detection from internal activations alone at inference time. We introduce a weak supervision framework that combines three complementary grounding signals: substring matching, sentence-embedding similarity, and an LLM-as-judge verdict, to label generated responses as grounded or hallucinated without human annotation. Using this framework, we construct a 15,000-sample dataset from SQuAD v2 (10,500 train/development samples and a separate 5,000-sample test set), where each example pairs a LLaMA-2-7B-generated answer with its full per-layer hidden states and structured hallucination labels. We then train five probing classifiers, ProbeMLP (M0), LayerWiseMLP (M1), CrossLayerTransformer (M2), HierarchicalTransformer (M3), and CrossLayerAttentionTransformerV2 (M4), directly on these hidden states, treating the external grounding signals as training-time supervision only. Our central hypothesis is that hallucination-detection signals can be distilled into transformer representations, enabling internal detection without any external verification at inference time. Results support this hypothesis: transformer-based probes achieve the strongest discrimination, with M2 performing best on 5-fold average AUC/F1 and M3 performing best on both single-fold validation and held-out test evaluation. We also benchmark inference efficiency: probe latency ranges from 0.15 to 5.62 ms (batched) and 1.55 to 6.66 ms (single sample), while end-to-end generation-plus-probe throughput remains approximately 0.231 queries per second, indicating negligible practical overhead.
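The probing setup described in the abstract, a small classifier reading per-layer hidden states and trained against the weak labels, can be sketched roughly as follows. This is a minimal illustration in the spirit of the simplest probe (M0, ProbeMLP); the pooling strategy, dimensions, and single linear layer are assumptions for clarity, not the paper's architectures (M1-M4 add layer-wise MLPs or cross-layer attention).

```python
import numpy as np

# Illustrative hidden-state probe: score hallucination probability from
# per-layer activations alone, with no external verification at inference.
# Dimensions are toy values (LLaMA-2-7B actually has 32 layers, d_model=4096);
# mean pooling and a single linear probe are simplifying assumptions.

rng = np.random.default_rng(0)
n_layers, d_model = 32, 64

def pool_hidden_states(h: np.ndarray) -> np.ndarray:
    """h: (n_layers, seq_len, d_model) -> concatenated per-layer mean vectors."""
    return h.mean(axis=1).reshape(-1)            # shape: (n_layers * d_model,)

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + np.exp(-z))

# Probe parameters would be fit on the weakly labeled training set;
# here they are random placeholders.
w = rng.normal(scale=0.01, size=n_layers * d_model)
b = 0.0

def probe_score(h: np.ndarray) -> float:
    """Hallucination probability from internal activations only."""
    return sigmoid(pool_hidden_states(h) @ w + b)

# Fake activations standing in for one generated answer's hidden states.
h = rng.normal(size=(n_layers, 10, d_model))
score = probe_score(h)                           # value in (0, 1)
```

Because the probe consumes only activations the model already produces during generation, its added cost is a single small forward pass, consistent with the millisecond-scale latencies reported above.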