HalluScan: A Systematic Benchmark for Detecting and Mitigating Hallucinations in Instruction-Following LLMs

arXiv cs.CL / 5/5/2026


Key Points

  • The paper introduces HalluScan, a benchmark framework that systematically evaluates hallucination detection and mitigation for instruction-following LLMs across 72 experimental configurations (6 detection methods × 4 open-weight model families × 3 domains).
  • It proposes HalluScore, a composite metric for hallucination-related quality that correlates with human expert judgments (Pearson r = 0.41).
  • The authors develop Adaptive Detection Routing (ADR), which cuts detection cost by 2.0× at the price of only a 0.1% drop in AUROC (a routing sketch follows the Abstract below).
  • Across domains, NLI Verification achieves the best overall detection performance (AUROC 0.88), well ahead of the next-best method, RAV (AUROC 0.66); see the sketch after this list.
  • The study also decomposes hallucination error cascades and finds that error types vary substantially by domain, motivating domain-aware mitigation strategies.

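The summary does not describe the paper's NLI Verification pipeline in detail, but NLI-based hallucination detection typically treats the source context as the premise and a generated claim as the hypothesis, then uses the model's non-entailment probability as the hallucination score. Below is a minimal sketch under that assumption; the `roberta-large-mnli` checkpoint and the toy examples are illustrative choices, not the paper's setup.

```python
# Minimal sketch of NLI-based hallucination detection, NOT the paper's exact
# pipeline: score each (context, claim) pair with an off-the-shelf MNLI model
# and use 1 - P(entailment) as the hallucination score.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from sklearn.metrics import roc_auc_score

MODEL = "roberta-large-mnli"  # illustrative checkpoint; the paper's verifier is unspecified
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL).eval()

def hallucination_score(context: str, claim: str) -> float:
    """Return 1 - P(entailment) for premise=context, hypothesis=claim."""
    inputs = tokenizer(context, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits.softmax(dim=-1).squeeze(0)
    # roberta-large-mnli label order: 0=contradiction, 1=neutral, 2=entailment
    return 1.0 - probs[2].item()

# Toy evaluation: AUROC of the score against binary hallucination labels.
examples = [
    ("The Eiffel Tower is in Paris.", "The Eiffel Tower is in Paris.", 0),
    ("The Eiffel Tower is in Paris.", "The Eiffel Tower is in Berlin.", 1),
    ("Water boils at 100 C at sea level.", "Water boils at 100 C at sea level.", 0),
    ("Water boils at 100 C at sea level.", "Water freezes at 50 C.", 1),
]
scores = [hallucination_score(c, h) for c, h, _ in examples]
labels = [y for _, _, y in examples]
print("AUROC:", roc_auc_score(labels, scores))
```

The AUROC printed here is on toy data only; the 0.88 figure reported above comes from the paper's own 72-configuration benchmark.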
Abstract

Large Language Models (LLMs) have demonstrated remarkable capabilities across diverse natural language processing tasks, yet they remain susceptible to hallucinations -- generating content that is factually incorrect, unfaithful to provided context, or misaligned with user instructions. We present HalluScan, a comprehensive benchmark framework that systematically evaluates hallucination detection and mitigation across 72 configurations spanning 6 detection methods, 4 open-weight model families, and 3 diverse domains. We introduce three key contributions: (1) HalluScore, a novel composite metric that achieves a Pearson correlation of r = 0.41 with human expert judgments; (2) Adaptive Detection Routing (ADR), an intelligent routing algorithm achieving 2.0x cost reduction with only 0.1% AUROC degradation; and (3) systematic error cascade decomposition revealing substantial variation in hallucination error types across domains. Our experiments reveal that NLI Verification achieves the highest overall AUROC of 0.88, while RAV achieves the second-highest AUROC of 0.66.
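The summary likewise leaves ADR's internals unspecified. A common way to trade a large cost reduction for a small AUROC loss is confidence-based cascading: run a cheap detector on every input and escalate only ambiguous cases to an expensive verifier such as NLI Verification. The sketch below is a hypothetical illustration of that pattern; every function name and threshold is an assumption, not the paper's algorithm.

```python
# Hypothetical sketch of confidence-based detection routing, in the spirit of
# what an "Adaptive Detection Routing" scheme could look like. The paper's
# actual ADR algorithm is not described in this summary; the detectors and
# the uncertainty band below are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RoutedResult:
    score: float        # hallucination score in [0, 1]
    used_expensive: bool

def route(
    example: dict,
    cheap_detector: Callable[[dict], float],      # e.g. a small classifier
    expensive_detector: Callable[[dict], float],  # e.g. NLI verification
    band: tuple[float, float] = (0.3, 0.7),       # "uncertain" score band
) -> RoutedResult:
    """Run the cheap detector; escalate only when its score is ambiguous.

    Confident cheap scores (outside `band`) are returned directly, so the
    expensive detector runs on only a fraction of inputs; that fraction
    determines the overall cost reduction.
    """
    s = cheap_detector(example)
    if band[0] <= s <= band[1]:
        return RoutedResult(score=expensive_detector(example), used_expensive=True)
    return RoutedResult(score=s, used_expensive=False)
```

Whether a cascade like this reaches the reported 2.0× cost reduction at only 0.1% AUROC degradation depends on how well the cheap detector's scores are calibrated; those numbers apply to the paper's own ADR algorithm.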