Sample Transform Cost-Based Training-Free Hallucination Detector for Large Language Models

arXiv cs.AI / 3/25/2026


Key Points

  • The paper proposes a training-free hallucination detector for large language models by using distribution complexity inferred from prompt-conditioned responses.
  • It computes Wasserstein (optimal-transport) distances between sets of token embeddings from pairwise samples to build a Wasserstein distance matrix that reflects transformation costs.
  • The authors derive two complementary signals—AvgWD (average transformation cost) and EigenWD (cost complexity via eigen-structure)—to quantify the likelihood of hallucination.
  • The method is extended to black-box LLM settings using a “teacher forcing” approach with an accessible teacher model.
  • Experiments on multiple models and datasets show AvgWD and EigenWD are competitive with strong uncertainty baselines and exhibit complementary behaviors, supporting “distribution complexity” as a truthfulness signal.
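The pipeline in the key points above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes each sampled response is reduced to an equal-size set of token embeddings with uniform weights (so exact optimal transport reduces to a minimum-cost assignment), and the `eigen_wd` function uses one plausible eigen-based complexity measure (spectral entropy of the distance matrix), since the paper's exact EigenWD definition is not given here.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def wasserstein_point_clouds(X, Y):
    """Exact Wasserstein distance between two equal-size point clouds
    with uniform weights: optimal transport reduces to an assignment."""
    cost = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

def distance_matrix(samples):
    """Pairwise Wasserstein distances between sampled responses,
    each given as an (n_tokens, dim) array of token embeddings."""
    n = len(samples)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = wasserstein_point_clouds(samples[i], samples[j])
    return D

def avg_wd(D):
    """AvgWD: mean transformation cost over distinct sample pairs."""
    n = D.shape[0]
    return D[np.triu_indices(n, k=1)].mean()

def eigen_wd(D):
    """Hypothetical EigenWD proxy: spectral entropy of the (symmetric)
    distance matrix's eigenvalue magnitudes, as a complexity measure."""
    ev = np.abs(np.linalg.eigvalsh(D))
    p = ev / ev.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())
```

Higher scores under either signal would flag the prompt-conditioned response distribution as more complex, and hence more likely hallucinated; in practice one would sample several responses from the LLM, embed their tokens, and threshold or rank these scores.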

Abstract

Hallucinations in large language models (LLMs) remain a central obstacle to trustworthy deployment, motivating detectors that are accurate, lightweight, and broadly applicable. Since an LLM with a prompt defines a conditional distribution, we argue that the complexity of the distribution is an indicator of hallucination. However, the density of the distribution is unknown and the samples (i.e., responses generated for the prompt) are discrete distributions, which leads to a significant challenge in quantifying the complexity of the distribution. We propose to compute the optimal-transport distances between the sets of token embeddings of pairwise samples, which yields a Wasserstein distance matrix measuring the costs of transforming between the samples. This Wasserstein distance matrix provides a means to quantify the complexity of the distribution defined by the LLM with the prompt. Based on the Wasserstein distance matrix, we derive two complementary signals: AvgWD, measuring the average cost, and EigenWD, measuring the cost complexity. This leads to a training-free detector for hallucinations in LLMs. We further extend the framework to black-box LLMs via teacher forcing with an accessible teacher model. Experiments show that AvgWD and EigenWD are competitive with strong uncertainty baselines and provide complementary behavior across models and datasets, highlighting distribution complexity as an effective signal for LLM truthfulness.