CoCR-RAG: Enhancing Retrieval-Augmented Generation in Web Q&A via Concept-oriented Context Reconstruction

arXiv cs.CL / March 26, 2026


Key Points

  • The paper introduces CoCR-RAG, a framework that improves retrieval-augmented generation for web Q&A by reconstructing a coherent, knowledge-dense context from heterogeneous multi-source documents.
  • It uses a concept distillation step based on Abstract Meaning Representation (AMR) to extract stable, linguistically grounded concepts from retrieved texts before fusing them.
  • Large language models then reconstruct a unified context by supplementing only the necessary sentence elements, aiming to reduce redundancy and irrelevant content that can harm factual consistency.
  • Experiments on PopQA and EntityQuestions show CoCR-RAG significantly outperforms prior context-reconstruction approaches and remains robust across different backbone LLMs, suggesting it can serve as a plug-and-play RAG component.

Abstract

Retrieval-augmented generation (RAG) has shown promising results in enhancing Q&A by incorporating information from the web and other external sources. However, the supporting documents retrieved from the heterogeneous web often originate from multiple sources with diverse writing styles, varying formats, and inconsistent granularity. Fusing such multi-source documents into a coherent and knowledge-intensive context remains a significant challenge, as the presence of irrelevant and redundant information can compromise the factual consistency of the inferred answers. This paper proposes the Concept-oriented Context Reconstruction RAG (CoCR-RAG), a framework that addresses the multi-source information fusion problem in RAG through linguistically grounded concept-level integration. Specifically, we introduce a concept distillation algorithm that extracts essential concepts from Abstract Meaning Representation (AMR), a stable semantic representation that structures the meaning of texts as logical graphs. The distilled concepts from multiple retrieved documents are then fused and reconstructed into a unified, information-intensive context by Large Language Models, which supplement only the necessary sentence elements to highlight the core knowledge. Experiments on the PopQA and EntityQuestions datasets demonstrate that CoCR-RAG significantly outperforms existing context-reconstruction methods across these Web Q&A benchmarks. Furthermore, CoCR-RAG shows robustness across various backbone LLMs, establishing itself as a flexible, plug-and-play component adaptable to different RAG frameworks.
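The pipeline the abstract describes, distilling concepts from AMR graphs of each retrieved document, fusing them, and prompting an LLM to reconstruct a unified context, can be sketched as follows. This is a minimal illustration, not the paper's algorithm: the AMR graphs are toy stand-ins (a real system might obtain them from an AMR parser such as amrlib), and the fusion step here is simple order-preserving deduplication.

```python
# Hypothetical sketch of a concept-oriented context reconstruction pipeline.
# AMR graphs are mocked as {node_id: (concept_label, [child_node_ids])};
# the paper's actual concept-distillation algorithm may differ substantially.

from collections import OrderedDict

def distill_concepts(amr_graph):
    """Extract the concept labels from a toy AMR graph."""
    return [concept for concept, _ in amr_graph.values()]

def fuse_concepts(graphs):
    """Merge concepts from multiple retrieved documents, dropping duplicates
    while preserving first-seen order (a stand-in for concept-level fusion)."""
    seen = OrderedDict()
    for graph in graphs:
        for concept in distill_concepts(graph):
            seen.setdefault(concept, None)
    return list(seen)

def build_reconstruction_prompt(question, concepts):
    """Prompt an LLM to reconstruct a unified, knowledge-dense context,
    supplementing only the sentence elements needed to connect the concepts."""
    return (
        "Rewrite the following concepts into one coherent context that "
        f"answers the question: {question}\n"
        "Concepts: " + "; ".join(concepts)
    )

# Toy AMR graphs for two retrieved documents about the same entity.
doc1 = {"n0": ("person", ["n1"]), "n1": ("bear-02", []), "n2": ("city", [])}
doc2 = {"m0": ("person", ["m1"]), "m1": ("write-01", [])}

concepts = fuse_concepts([doc1, doc2])
print(concepts)  # the duplicated "person" concept is kept only once
print(build_reconstruction_prompt("Who is X?", concepts))
```

In the full framework, the reconstructed context produced by the LLM (rather than the raw retrieved passages) would then be fed to the answer-generating model, which is what makes the approach a plug-and-play component for different RAG stacks.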