Retromorphic Testing with Hierarchical Verification for Hallucination Detection in RAG

arXiv cs.CL · March 31, 2026


Key Points

  • The paper introduces RT4CHART, a retromorphic testing framework to detect hallucinations in retrieval-augmented generation (RAG) by assessing context-faithfulness against retrieved evidence.
  • RT4CHART decomposes LLM outputs into independently verifiable claims and uses hierarchical, local-to-global verification to label each claim as entailed, contradicted, or baseless.
  • It produces fine-grained, interpretable audits by mapping claim-level decisions back to specific answer spans and retrieving explicit supporting or refuting evidence from the context.
  • Experiments on the RAGTruth++ and newly re-annotated RAGTruth-Enhance benchmarks show strong improvements, including an answer-level hallucination detection F1 of 0.776 on RAGTruth++ and span-level F1 of 47.5% on RAGTruth-Enhance.
  • The authors’ re-annotation finds 1.68x more hallucination cases than the prior labeling, indicating that existing benchmarks may understate hallucination prevalence and underscoring the need for more reliable evaluation datasets.

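The core of the pipeline described above — labeling each decomposed claim as entailed, contradicted, or baseless against the retrieved context — can be sketched as follows. This is a toy stand-in, not the paper's method: RT4CHART uses hierarchical, local-to-global LLM-based verification, whereas this sketch substitutes a simple lexical-overlap heuristic with a crude negation check, purely to illustrate the three-way decision and evidence retrieval interface.

```python
from dataclasses import dataclass
from typing import List

LABELS = ("entailed", "contradicted", "baseless")


@dataclass
class ClaimVerdict:
    claim: str
    label: str      # one of LABELS
    evidence: str   # supporting/refuting context sentence, empty if baseless


def verify_claim(claim: str, context_sentences: List[str]) -> ClaimVerdict:
    """Toy local verification step (hypothetical heuristic, not RT4CHART's):
    pick the context sentence with the highest token overlap as candidate
    evidence, then decide among entailed / contradicted / baseless."""
    claim_tokens = set(claim.lower().split())
    best_sent, best_overlap = None, 0.0
    for sent in context_sentences:
        overlap = len(claim_tokens & set(sent.lower().split())) / max(len(claim_tokens), 1)
        if overlap > best_overlap:
            best_sent, best_overlap = sent, overlap

    # No sufficiently overlapping evidence -> claim is unsupported by context.
    if best_sent is None or best_overlap < 0.3:
        return ClaimVerdict(claim, "baseless", "")

    # Crude contradiction signal: negation present in exactly one of the two.
    negations = {"not", "no", "never"}
    claim_neg = bool(claim_tokens & negations)
    evidence_neg = bool(set(best_sent.lower().split()) & negations)
    if claim_neg != evidence_neg:
        return ClaimVerdict(claim, "contradicted", best_sent)
    return ClaimVerdict(claim, "entailed", best_sent)


context = [
    "The capital of France is Paris.",
    "Paris hosted the 2024 Olympics.",
]
print(verify_claim("The capital of France is Paris.", context).label)      # entailed
print(verify_claim("The capital of France is not Paris.", context).label)  # contradicted
print(verify_claim("Berlin hosted many festivals yearly.", context).label) # baseless
```

Note how each verdict carries the evidence sentence it was judged against — this is the property that lets claim-level decisions be mapped back to answer spans for interpretable auditing.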
Abstract

Large language models (LLMs) continue to hallucinate in retrieval-augmented generation (RAG), producing claims that are unsupported by or conflict with the retrieved context. Detecting such errors remains challenging when faithfulness is evaluated solely with respect to the retrieved context. Existing approaches either provide coarse-grained, answer-level scores or focus on open-domain factuality, often lacking fine-grained, evidence-grounded diagnostics. We present RT4CHART, a retromorphic testing framework for context-faithfulness assessment. RT4CHART decomposes model outputs into independently verifiable claims and performs hierarchical, local-to-global verification against the retrieved context. Each claim is assigned one of three labels: entailed, contradicted, or baseless. Furthermore, RT4CHART maps claim-level decisions back to specific answer spans and retrieves explicit supporting or refuting evidence from the context, enabling fine-grained and interpretable auditing. We evaluate RT4CHART on RAGTruth++ (408 samples) and RAGTruth-Enhance (2,675 samples), a newly re-annotated benchmark. RT4CHART achieves the best answer-level hallucination detection F1 among all baselines. On RAGTruth++, it reaches an F1 score of 0.776, outperforming the strongest baseline by 83%. On RAGTruth-Enhance, it achieves a span-level F1 of 47.5%. Ablation studies show that the hierarchical verification design is the primary driver of performance gains. Finally, our re-annotation reveals 1.68x more hallucination cases than the original labels, suggesting that existing benchmarks substantially underestimate the prevalence of hallucinations.
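The abstract reports both answer-level and span-level F1. One common way span-level F1 is computed for hallucination detection is over token indices: treat each predicted and gold hallucination span as a set of token positions and score precision/recall on their intersection. The paper's exact matching rule may differ; this is a generic sketch of that scoring scheme.

```python
from typing import List, Tuple

Span = Tuple[int, int]  # half-open token-index interval [start, end)


def span_f1(pred_spans: List[Span], gold_spans: List[Span]) -> float:
    """Token-level span F1 (a standard scheme; not necessarily the paper's):
    flatten spans into token-position sets and score their overlap."""
    pred = {i for s, e in pred_spans for i in range(s, e)}
    gold = {i for s, e in gold_spans for i in range(s, e)}
    if not pred and not gold:
        return 1.0  # nothing to flag, nothing flagged
    if not pred or not gold:
        return 0.0
    tp = len(pred & gold)
    precision = tp / len(pred)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


print(span_f1([(0, 5)], [(0, 5)]))  # 1.0 — exact match
print(span_f1([(0, 4)], [(2, 6)]))  # 0.5 — partial overlap
```

Under this scheme, partially overlapping predictions earn partial credit, which is why span-level scores (like the 47.5% reported here) are typically much lower than answer-level scores: the detector must localize the hallucinated text, not just flag the answer.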