HIVE: Hidden-Evidence Verification for Hallucination Detection in Diffusion Large Language Models

arXiv cs.CL · April 30, 2026

💬 Opinion · Models & Research

Key Points

  • Diffusion-based large language models can reveal hallucination signals at intermediate denoising steps, not just in the final generated text.
  • The paper introduces HIVE, which extracts and compresses hidden evidence from denoising trajectories, selects the most informative step-layer evidence, and conditions a verifier language model using prefix embeddings.
  • HIVE outputs both a continuous hallucination score (from verifier logits) and structured verification results such as hallucination types, evidence pairs, and brief rationales.
  • Across two diffusion LLMs and three QA benchmarks, HIVE outperforms eight baseline methods, reaching up to 0.9236 AUROC and 0.9537 AUPRC.
  • Ablation experiments show that key components—hidden-evidence conditioning, learned evidence selection, two-stream evidence representation, and step-layer embeddings—are essential to the performance gains.
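The pipeline described above can be sketched end to end. This is a minimal toy illustration, not the paper's implementation: all shapes, the mean-pooling compressor, and the linear scorer/verifier heads are hypothetical stand-ins for HIVE's learned components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy denoising trajectory: one hidden-state tensor per (step, layer).
# Shape (steps, layers, seq_len, hidden_dim); values are synthetic.
steps, layers, seq_len, hidden = 8, 4, 16, 32
trajectory = rng.normal(size=(steps, layers, seq_len, hidden))

# 1) Compress: mean-pool over tokens to get one evidence vector
#    per (step, layer) pair. HIVE's actual compressor is learned.
evidence = trajectory.mean(axis=2)              # (steps, layers, hidden)

# 2) Select: score each step-layer pair (here with a random linear
#    scorer standing in for learned evidence selection) and keep top-k.
w_select = rng.normal(size=hidden)
scores = evidence @ w_select                    # (steps, layers)
k = 3
top_idx = np.argsort(scores.reshape(-1))[-k:]   # top-k step-layer pairs

# 3) Condition: the selected evidence vectors play the role of prefix
#    embeddings prepended to the verifier's input.
prefix = evidence.reshape(-1, hidden)[top_idx]  # (k, hidden)

# 4) Verify: a stand-in verifier head maps the prefix to a decision
#    logit; a sigmoid yields the continuous hallucination score.
w_verify = rng.normal(size=hidden)
logit = prefix.mean(axis=0) @ w_verify
score = 1.0 / (1.0 + np.exp(-logit))
print(f"hallucination score: {score:.3f}")
```

In the real system, steps 2–4 are trained jointly, and step 4 is a full verifier language model that also emits structured outputs (hallucination type, evidence pairs, rationale) rather than a single logit.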

Abstract

Diffusion large language models (D-LLMs) generate text through multi-step denoising, so hallucination signals may emerge throughout the trajectory rather than only in the final output. Existing detectors mainly rely on output uncertainty or coarse trace statistics, which often fail to capture the richer hidden dynamics of D-LLMs. We propose HIVE, a hidden-evidence verification framework that extracts compressed hidden evidence from denoising trajectories, selects informative step-layer evidence, and conditions a verifier language model on the selected evidence through prefix embeddings. HIVE produces both a continuous hallucination score from verifier decision logits and structured verification outputs, including hallucination types, evidence pairs, and short rationales. Across two D-LLMs and three QA benchmarks, HIVE consistently outperforms eight strong baselines and achieves up to 0.9236 AUROC and 0.9537 AUPRC. Ablation studies further confirm the importance of hidden-evidence conditioning, learned evidence selection, two-stream evidence representation, and step-layer embeddings. These results suggest that selected hidden evidence from denoising trajectories provides a stronger and more usable hallucination signal than output-only uncertainty or coarse trace statistics.