RARE: Redundancy-Aware Retrieval Evaluation Framework for High-Similarity Corpora

arXiv cs.CL · April 22, 2026


Key Points

  • Existing QA and retrieval benchmarks often assume low document overlap, which can make evaluation results unreliable for real-world RAG corpora with highly redundant, high-similarity documents.
  • The paper introduces RARE (Redundancy-Aware Retrieval Evaluation), which builds more realistic benchmarks by decomposing documents into atomic facts for redundancy tracking and by using CRRF to improve LLM-generated benchmark data.
  • CRRF scores multiple quality criteria separately and fuses them by rank, helping avoid trivial or low-quality outputs from LLMs when generating benchmark data.
  • Applying RARE to Finance, Legal, and Patent corpora yields RedQA, a benchmark on which retriever performance drops sharply on deeper, higher-hop tasks compared with standard benchmarks, exposing robustness gaps those benchmarks fail to capture.
  • RARE is positioned as a framework that practitioners can use to create domain-specific RAG evaluations that better match deployment conditions.
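The rank-based fusion step described above can be sketched in a few lines. The summary does not give the exact CRRF formula, so this is a minimal stand-in using standard reciprocal-rank fusion over per-criterion score lists; the function name `rank_fuse` and the example criteria (difficulty, faithfulness) are illustrative assumptions, not the paper's API.

```python
# Sketch: fuse several per-criterion quality scores by rank, in the
# spirit of CRRF. Uses reciprocal-rank fusion (RRF) as a stand-in,
# since the summary does not specify CRRF's exact formula.

def rank_fuse(candidates, criterion_scores, k=60):
    """Fuse per-criterion score dicts into one ranking.

    candidates: list of candidate IDs.
    criterion_scores: one dict per criterion, mapping ID -> score
        (higher is better).
    Returns candidates sorted best-first by fused score.
    """
    fused = {c: 0.0 for c in candidates}
    for scores in criterion_scores:
        # Rank candidates under this criterion (rank 1 = best).
        ranked = sorted(candidates, key=lambda c: scores[c], reverse=True)
        for rank, c in enumerate(ranked, start=1):
            fused[c] += 1.0 / (k + rank)  # reciprocal-rank contribution
    return sorted(candidates, key=lambda c: fused[c], reverse=True)

# Example: three generated QA candidates scored on two criteria.
# A candidate that is merely best on one criterion but weak on the
# other (q2) loses to one that is strong on both (q3).
difficulty = {"q1": 0.6, "q2": 0.2, "q3": 0.9}
faithfulness = {"q1": 0.2, "q2": 0.9, "q3": 0.6}
print(rank_fuse(["q1", "q2", "q3"], [difficulty, faithfulness]))
# → ['q3', 'q2', 'q1']
```

Because fusion operates on ranks rather than raw scores, no single criterion's score scale can dominate, which is how rank fusion helps filter out candidates that look trivially good under one criterion alone.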

Abstract

Existing QA benchmarks typically assume distinct documents with minimal overlap, yet real-world retrieval-augmented generation (RAG) systems operate on corpora such as financial reports, legal codes, and patents, where information is highly redundant and documents exhibit strong inter-document similarity. This mismatch undermines evaluation validity: retrievers can be unfairly undervalued even when they retrieve documents that provide sufficient evidence, because redundancy across documents is not accounted for in evaluation. On the other hand, retrievers that perform well on standard benchmarks often generalize poorly to real-world corpora with highly similar and redundant documents. We present RARE (Redundancy-Aware Retrieval Evaluation), a framework for constructing realistic benchmarks by (i) decomposing documents into atomic facts to enable precise redundancy tracking and (ii) enhancing LLM-based data generation with CRRF. RAG benchmark data usually requires multiple quality criteria, but LLMs often yield trivial outputs. CRRF scores criteria separately and fuses decisions by rank, improving the reliability of generated data. Applying RARE to Finance, Legal, and Patent corpora, we introduce RedQA, where a strong retriever baseline drops from 66.4% PerfRecall@10 on 4-hop General-Wiki to 5.0-27.9% PerfRecall@10 at 4-hop depth, revealing robustness gaps that current benchmarks fail to capture. RARE enables practitioners to build domain-specific RAG evaluations that faithfully reflect real-world deployment conditions.
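The core evaluation idea in the abstract, crediting a retriever when its retrieved documents jointly supply the required evidence rather than when they match specific gold document IDs, can be sketched as follows. The fact representation and the names `doc_facts`, `required_facts`, and `redundancy_aware_hit` are illustrative assumptions; the paper's actual atomic-fact decomposition and PerfRecall metric may differ in detail.

```python
# Sketch: redundancy-aware retrieval credit. A retrieval counts as a
# hit if the atomic facts covered by the top-k retrieved documents
# include every fact the question requires, regardless of WHICH of
# several redundant documents supplied each fact.

def redundancy_aware_hit(retrieved_ids, doc_facts, required_facts, k=10):
    """True if the top-k retrieved docs jointly cover all required facts."""
    covered = set()
    for doc_id in retrieved_ids[:k]:
        covered |= doc_facts.get(doc_id, set())
    return required_facts <= covered

# Two redundant filings both state fact f1. A standard benchmark that
# expects gold doc "a" would penalize retrieving "b" instead, even
# though "b" provides the same evidence; fact coverage does not.
doc_facts = {
    "a": {"f1", "f2"},
    "b": {"f1"},        # duplicates f1 from "a"
    "c": {"f3"},
}
required = {"f1", "f3"}
print(redundancy_aware_hit(["b", "c"], doc_facts, required))  # → True
print(redundancy_aware_hit(["b"], doc_facts, required))       # → False
```

A doc-ID-based metric would score the first retrieval as a miss (gold doc "a" is absent), which is exactly the "unfairly undervalued" failure mode the abstract describes.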