Detecting and Correcting Reference Hallucinations in Commercial LLMs and Deep Research Agents

arXiv cs.CL / 4/6/2026


Key Points

  • The paper systematically measures citation URL validity in commercial LLMs and deep research agents using DRBench (53,090 URLs) and ExpertQA (168,021 URLs), focusing on whether citation URLs are hallucinated or non-resolving.
  • It finds that 3–13% of citation URLs appear hallucinated (no record in the Wayback Machine) and 5–18% are non-resolving overall, with large differences by domain (e.g., Business vs. Theology) and by model/agent.
  • Deep research agents tend to generate more citations per query than search-augmented LLMs, but they also hallucinate URLs at higher rates.
  • The authors break down failure modes, showing that some models fabricate non-resolving URLs entirely while others produce links that reflect real retrieval but suffer from link-rot.
  • They release urlhealth, an open-source Wayback Machine-based tool for classifying URLs as stale vs. hallucinated, and show that agent self-correction with urlhealth can reduce non-resolving citations by 6–79×, to under 1%, depending on the model's tool-use competence.
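The stale-vs-hallucinated distinction in the last bullet reduces to a simple decision rule: a URL that no longer resolves but has a Wayback Machine snapshot is link-rot (stale), while one with no archival record likely never existed (hallucinated). A minimal sketch of that rule, with the archive lookup done via the Internet Archive's public availability endpoint (how urlhealth itself batches or caches such checks is not described here, so the helper names are illustrative):

```python
import json
import urllib.parse
import urllib.request

def classify(resolves: bool, archived: bool) -> str:
    """Decision rule from the paper's taxonomy:
    - "live": the URL resolves today
    - "stale": it does not resolve but has a Wayback snapshot (link-rot)
    - "hallucinated": it does not resolve and was never archived
    """
    if resolves:
        return "live"
    return "stale" if archived else "hallucinated"

def has_wayback_snapshot(url: str, timeout: float = 10.0) -> bool:
    """Check the Internet Archive's availability API for any snapshot.
    Single-URL, no retries -- a sketch, not the urlhealth implementation."""
    api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
    with urllib.request.urlopen(api, timeout=timeout) as resp:
        data = json.load(resp)
    return bool(data.get("archived_snapshots", {}).get("closest"))
```

For a URL that fails a liveness probe, `classify(False, has_wayback_snapshot(url))` then yields either `"stale"` or `"hallucinated"`.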

Abstract

Large language models and deep research agents supply citation URLs to support their claims, yet the reliability of these citations has not been systematically measured. We address six research questions about citation URL validity using 10 models and agents on DRBench (53,090 URLs) and 3 models on ExpertQA (168,021 URLs across 32 academic fields). We find that 3–13% of citation URLs are hallucinated, meaning they have no record in the Wayback Machine and likely never existed, while 5–18% are non-resolving overall. Deep research agents generate substantially more citations per query than search-augmented LLMs but hallucinate URLs at higher rates. Domain effects are pronounced: non-resolving rates range from 5.4% (Business) to 11.4% (Theology), with per-model effects even larger. Decomposing failures reveals that some models fabricate every non-resolving URL, while others show substantial link-rot fractions indicating genuine retrieval. As a solution, we release urlhealth, an open-source tool for URL liveness checking and stale-vs-hallucinated classification using the Wayback Machine. In agentic self-correction experiments, models equipped with urlhealth reduce non-resolving citation URLs by 6–79×, to under 1%, though effectiveness depends on the model's tool-use competence. The tool and all data are publicly available. Our characterization findings, failure taxonomy, and open-source tooling establish that citation URL validity is both measurable at scale and correctable in practice.
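The self-correction experiments can be pictured as a filtering step: the agent checks each citation URL it produced, keeps the live ones, and re-searches for replacements for the rest. A hedged sketch of that step (the function and field names are hypothetical, not the paper's API; the paper's agents invoke urlhealth as a tool rather than a library call):

```python
def partition_citations(citations, status):
    """Split citations into those safe to keep (live URLs) and those the
    agent should re-search and replace. `status` maps URL -> one of
    "live" / "stale" / "hallucinated", e.g. from a urlhealth-style check.
    Illustrative only; not the paper's actual interface."""
    keep, redo = [], []
    for cite in citations:
        if status.get(cite["url"]) == "live":
            keep.append(cite)   # resolving URL: retain as-is
        else:
            redo.append(cite)   # stale or hallucinated: send back for re-search
    return keep, redo
```

Whether the agent actually drives non-resolving citations under 1% then depends, as the abstract notes, on its competence at using the checking tool and acting on its output.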