URAG: A Benchmark for Uncertainty Quantification in Retrieval-Augmented Large Language Models

arXiv cs.AI / March 23, 2026


Key Points

  • URAG is a new benchmark designed to quantify uncertainty in retrieval-augmented generation (RAG) systems across domains like healthcare, programming, science, math, and general text.
  • The benchmark reformulates open-ended generation tasks as multiple-choice questions to enable principled uncertainty quantification via conformal prediction, and evaluates performance with both accuracy and prediction-set size under the LAC and APS scoring rules.
  • Across 8 standard RAG methods, URAG shows that accuracy gains often come with reduced uncertainty, but this relationship degrades under retrieval noise; simpler modular RAG methods tend to offer better accuracy-uncertainty trade-offs than more complex reasoning pipelines, with no single approach universally reliable across domains.
  • The study also finds that retrieval depth, dependence on parametric knowledge, and exposure to confidence cues can amplify confident errors and hallucinations, and provides a GitHub-hosted codebase for reproducibility.

Abstract

Retrieval-Augmented Generation (RAG) has emerged as a widely adopted approach for enhancing LLMs in scenarios that demand extensive factual knowledge. However, current RAG evaluations concentrate primarily on correctness, which may not fully capture the impact of retrieval on LLM uncertainty and reliability. To bridge this gap, we introduce URAG, a comprehensive benchmark designed to assess the uncertainty of RAG systems across various fields like healthcare, programming, science, math, and general text. By reformulating open-ended generation tasks into multiple-choice question answering, URAG allows for principled uncertainty quantification via conformal prediction. We apply the evaluation pipeline to 8 standard RAG methods, measuring their performance through both accuracy and prediction-set sizes based on LAC and APS metrics. Our analysis shows that (1) accuracy gains often coincide with reduced uncertainty, but this relationship breaks under retrieval noise; (2) simple modular RAG methods tend to offer better accuracy-uncertainty trade-offs than more complex reasoning pipelines; and (3) no single RAG approach is universally reliable across domains. We further show that (4) retrieval depth, parametric knowledge dependence, and exposure to confidence cues can amplify confident errors and hallucinations. Ultimately, URAG establishes a systematic benchmark for analyzing and enhancing the trustworthiness of retrieval-augmented systems. Our code is available on GitHub.
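The paper's uncertainty measure rests on split conformal prediction over the multiple-choice reformulation: a nonconformity score is calibrated on held-out questions, and at test time the system outputs a *set* of answer choices whose size reflects its uncertainty. The sketch below illustrates the LAC variant (score = 1 − probability assigned to the true choice) on synthetic softmax scores; the data generator, the number of choices, and the miscoverage level are illustrative assumptions, not details taken from URAG.

```python
import math
import random

random.seed(0)
K = 4        # answer choices A-D in a multiple-choice reformulation (assumption)
ALPHA = 0.1  # target miscoverage: sets should contain the truth >= 90% of the time

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def make_example():
    # Synthetic stand-in for a RAG system's choice probabilities (assumption):
    # the true choice gets a boosted logit so the model is usually, not always, right.
    true = random.randrange(K)
    logits = [random.gauss(0.0, 1.0) for _ in range(K)]
    logits[true] += 2.0
    return softmax(logits), true

# Calibration split: score each question by 1 - p(true choice)  (LAC score).
cal = [make_example() for _ in range(500)]
scores = sorted(1.0 - probs[true] for probs, true in cal)
n = len(scores)

# Conservative finite-sample quantile: ceil((n+1)(1-alpha))-th smallest score.
q_index = math.ceil((n + 1) * (1 - ALPHA)) - 1
qhat = scores[min(q_index, n - 1)]

# Test time: keep every choice whose LAC score clears the calibrated threshold.
# A larger prediction set signals higher uncertainty on that question.
probs, true = make_example()
pred_set = [c for c in range(K) if 1.0 - probs[c] <= qhat]
print(f"threshold={qhat:.3f} set={pred_set} covers_truth={true in pred_set}")
```

Averaged over a test split, the size of these prediction sets is exactly the quantity URAG reports alongside accuracy; the APS variant differs only in the score (cumulative sorted probability mass needed to reach the true choice) while the calibration recipe stays the same.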