Faithfulness-Aware Uncertainty Quantification for Fact-Checking the Output of Retrieval Augmented Generation

arXiv cs.CL / 4/20/2026


Key Points

  • Retrieval-Augmented Generation (RAG) can still produce hallucinations even when relevant context is retrieved, due to both model-internal inaccuracies and issues in retrieved evidence.
  • Prior hallucination-mitigation methods often treat factuality and faithfulness to retrieved evidence as the same, which can wrongly flag correct statements as hallucinations when retrieval does not explicitly support them.
  • The paper proposes FRANQ, a hallucination-detection method that performs uncertainty quantification (UQ) separately for factuality and faithfulness by conditioning on how well each statement aligns with the retrieved context.
  • To evaluate FRANQ, the authors create a new long-form QA dataset annotated for both factuality and faithfulness, using a mix of automated labeling and manual validation for difficult cases.
  • Experiments across multiple datasets, tasks, and LLMs indicate that FRANQ detects factual errors in RAG outputs more accurately than existing UQ-based approaches.
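The conditioning idea in the key points above can be sketched as a total-probability mix of two confidence estimates. The function names and the weighting formula below are illustrative assumptions, not FRANQ's published algorithm: the idea is simply that the confidence used for the faithful case and the unfaithful case come from different UQ techniques, weighted by the estimated probability that the statement is faithful to the retrieved context.

```python
# Hedged sketch of faithfulness-conditioned uncertainty quantification.
# All names and the mixing formula are illustrative assumptions, not
# the paper's actual method.

def faithfulness_aware_confidence(p_faithful: float,
                                  conf_if_faithful: float,
                                  conf_if_unfaithful: float) -> float:
    """Combine two factuality-confidence estimates by conditioning on
    the estimated probability that a statement is faithful to the
    retrieved context (law of total probability)."""
    return (p_faithful * conf_if_faithful
            + (1.0 - p_faithful) * conf_if_unfaithful)


# Example: a statement judged likely faithful leans on the
# faithful-branch estimate; an unfaithful one leans the other way.
high = faithfulness_aware_confidence(0.9, 0.95, 0.30)  # mostly faithful
low = faithfulness_aware_confidence(0.1, 0.95, 0.30)   # mostly unfaithful
```

In this sketch, a statement strongly supported by retrieval inherits most of its confidence from the faithful-branch estimator, while an unsupported (but possibly still correct) statement falls back on the other estimator, which is what lets the method avoid flagging unsupported-but-true statements as hallucinations.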

Abstract

Large Language Models (LLMs) enhanced with retrieval, an approach known as Retrieval-Augmented Generation (RAG), have achieved strong performance in open-domain question answering. However, RAG remains prone to hallucinations: factually incorrect outputs may arise from inaccuracies in either the model's internal knowledge or the retrieved context. Existing approaches to mitigating hallucinations often conflate factuality with faithfulness to the retrieved evidence, incorrectly labeling factually correct statements as hallucinations if they are not explicitly supported by the retrieval. In this paper, we introduce FRANQ, a new method for hallucination detection in RAG outputs. FRANQ applies distinct uncertainty quantification (UQ) techniques to estimate factuality, conditioning on whether a statement is faithful to the retrieved context. To evaluate FRANQ and competing UQ methods, we construct a new long-form question answering dataset annotated for both factuality and faithfulness, combining automated labeling with manual validation of challenging cases. Extensive experiments across multiple datasets, tasks, and LLMs show that FRANQ achieves more accurate detection of factual errors in RAG-generated responses compared to existing approaches.