FGR-ColBERT: Identifying Fine-Grained Relevance Tokens During Retrieval

arXiv cs.CL / 4/3/2026


Key Points

  • The paper argues that standard document retrieval provides only document-level relevance, failing to surface fine-grained evidence such as the specific spans that make a document relevant.
  • It introduces FGR-ColBERT, a modification of the ColBERT retrieval model that distills fine-grained relevance cues from an LLM and incorporates them directly into retrieval, avoiding expensive post-retrieval LLM reranking.
  • Experiments on MS MARCO show that FGR-ColBERT (110M) reaches a token-level F1 of 64.5, outperforming Gemma 2 (27B) at 62.8 while being roughly 245x smaller.
  • The approach maintains strong retrieval quality, retaining 99% of the baseline's Recall@50 while adding only about 1.12x latency overhead versus the original ColBERT.
  • Overall, the work presents a practical pathway to achieve token-level evidence signals with retrieval efficiency comparable to existing late-interaction retrieval models.

Abstract

Document retrieval identifies relevant documents but does not provide fine-grained evidence cues, such as specific relevant spans. A possible solution is to apply an LLM after retrieval; however, this introduces significant computational overhead and limits practical deployment. We propose FGR-ColBERT, a modification of the ColBERT retrieval model that integrates fine-grained relevance signals distilled from an LLM directly into the retrieval function. Experiments on MS MARCO show that FGR-ColBERT (110M) achieves a token-level F1 of 64.5, exceeding the 62.8 of Gemma 2 (27B), despite being approximately 245 times smaller. At the same time, it preserves retrieval effectiveness (99% relative Recall@50) and remains efficient, incurring only a ~1.12x latency overhead compared to the original ColBERT.
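
To make the late-interaction setting concrete, here is a minimal sketch of ColBERT-style MaxSim scoring, the mechanism FGR-ColBERT builds on. The paper's actual fine-grained relevance head is not detailed in this summary; the sketch only illustrates how per-query-token max similarities and their argmax indices already point at individual document tokens, the kind of token-level signal the approach refines. The random embeddings and the `maxsim_score` function name are illustrative assumptions, standing in for real trained token encoders.

```python
import numpy as np

def maxsim_score(q_emb, d_emb):
    """ColBERT late interaction: for each query token, take the max
    cosine similarity over document tokens, then sum over query tokens.
    Also returns, per query token, which document token matched best --
    a crude stand-in for the fine-grained cues the paper distills."""
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    d = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    sim = q @ d.T                             # (n_query_tokens, n_doc_tokens)
    per_token_max = sim.max(axis=1)           # one score per query token
    matched_doc_tokens = sim.argmax(axis=1)   # best-matching doc token index
    return per_token_max.sum(), matched_doc_tokens

# Illustrative random stand-ins for encoder outputs (4 query tokens,
# 10 document tokens, embedding dim 8).
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
d = rng.normal(size=(10, 8))
score, matches = maxsim_score(q, d)
```

Because the document-level score is just a sum of per-token maxima, a token-level relevance signal can be attached at retrieval time without a second scoring pass, which is consistent with the ~1.12x latency overhead the paper reports.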