Rethinking Retrieval-Augmented Generation as a Cooperative Decision-Making Problem

arXiv cs.CL / 4/27/2026

💬 Opinion · Models & Research

Key Points

  • The paper argues that many Retrieval-Augmented Generation (RAG) systems rely on an asymmetric, ranking-centric design where generator quality depends heavily on reranker outputs.
  • It proposes Cooperative RAG (CoRAG), reframing the reranker and generator as peer decision-makers that jointly optimize toward a shared objective.
  • CoRAG coordinates reranking and generation so they operate in concert, aiming to improve the quality and consistency of final responses.
  • Experiments show CoRAG achieves good generalization and better generation stability, including when trained with only about 10K PopQA samples.
  • The authors release their CoRAG model on GitHub for others to reproduce and build upon.

Abstract

Retrieval-Augmented Generation (RAG) has demonstrated strong effectiveness in knowledge-intensive tasks by grounding language generation in external evidence. Despite this success, many existing RAG systems are built on a ranking-centric, asymmetric dependency paradigm, where the generator's output quality depends heavily on the reranker's results. To overcome this limitation, we propose Cooperative Retrieval-Augmented Generation (CoRAG), a framework that treats the reranker and the generator as peer decision-makers rather than components connected through an asymmetric dependency pipeline. By jointly optimizing their behaviors toward a shared task objective, the reranker and generator are encouraged to cooperate, ensuring that document reranking and generation work in concert to improve the final response. Experimental results demonstrate good generalization and improved generation stability for CoRAG, even when the model is trained on only around 10K PopQA samples. Our model is released at https://github.com/CoderrrSong/CoRAG.
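The shared objective that couples the two components can be illustrated with a toy calculation. The paper does not spell out its loss here, so the sketch below is an assumption: it uses a marginal-likelihood objective, where the answer's probability is marginalized over the reranker's document distribution. Lowering this loss rewards the reranker for upweighting documents under which the generator succeeds, and the generator for answering well from highly ranked documents; the scores and log-probabilities are made-up numbers.

```python
import math


def softmax(scores):
    """Convert raw reranker scores into a probability distribution."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]


def shared_objective(rerank_scores, gen_logprobs):
    """Negative log marginal likelihood of the answer.

    rerank_scores: one relevance score per retrieved document (reranker).
    gen_logprobs:  log p(answer | document) per document (generator).
    Both components influence the same scalar, so gradients (in a real
    trainable system) would push them toward cooperative behavior.
    """
    p_docs = softmax(rerank_scores)
    marginal = sum(p * math.exp(lp) for p, lp in zip(p_docs, gen_logprobs))
    return -math.log(marginal)


# Toy example with three retrieved documents. The generator answers
# best from document 0; a reranker that also favors document 0 yields
# a lower shared loss than one that favors document 2.
gen_logprobs = [-0.2, -2.0, -3.0]
aligned_loss = shared_objective([2.0, 0.5, -1.0], gen_logprobs)
misaligned_loss = shared_objective([-1.0, 0.5, 2.0], gen_logprobs)
```

In this sketch, cooperation simply means the two distributions agree on which evidence matters: the aligned configuration produces a strictly lower loss, which is the signal joint training would exploit.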