
AILS-NTUA at SemEval-2026 Task 8: Evaluating Multi-Turn RAG Conversations

arXiv cs.CL / 3/12/2026

📰 News · Models & Research

Key Points

  • The paper introduces the AILS-NTUA system for SemEval-2026 Task 8 (MTRAGEval), addressing all three subtasks: passage retrieval (A), reference-grounded response generation (B), and end-to-end RAG (C).
  • It proposes a query-diversity-over-retriever-diversity strategy, in which five LLM-based query reformulations are issued to a single corpus-aligned sparse retriever and fused via variance-aware nested Reciprocal Rank Fusion.
  • The system employs a multistage generation pipeline that decomposes grounded generation into evidence span extraction, dual-candidate drafting, and calibrated multi-judge selection.
  • Empirically, it ranks 1st in Task A (nDCG@5: 0.5776, +20.5% over the strongest baseline) and 2nd in Task B (HM: 0.7698), with analysis showing that answerability calibration, rather than retrieval coverage, is the primary bottleneck for end-to-end performance.
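The fusion step in the second bullet can be sketched with standard Reciprocal Rank Fusion: each reformulation's ranked list votes for documents with weight 1/(k + rank), and scores are summed across lists. The paper's "variance-aware nested" variant is not specified here, so the sketch below shows plain weighted RRF with hypothetical document ids; the per-run `weights` hook is where variance-aware weighting would plug in.

```python
from collections import defaultdict

def rrf_fuse(rankings, k=60, weights=None):
    """Fuse several ranked lists of doc ids via Reciprocal Rank Fusion.

    score(d) = sum_i w_i / (k + rank_i(d)), ranks starting at 1.
    k=60 is the conventional smoothing constant from the RRF literature.
    """
    if weights is None:
        weights = [1.0] * len(rankings)
    scores = defaultdict(float)
    for w, ranking in zip(weights, rankings):
        for rank, doc in enumerate(ranking, start=1):
            scores[doc] += w / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

# Five hypothetical result lists, one per query reformulation,
# all retrieved from the same corpus-aligned sparse retriever:
runs = [
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d1", "d4", "d2"],
    ["d3", "d1", "d2"],
    ["d2", "d3", "d1"],
]
fused = rrf_fuse(runs)  # -> ["d1", "d2", "d3", "d4"]
```

Because RRF only consumes ranks, not scores, it fuses runs from differently calibrated queries without score normalization, which is part of why query-level diversity over one retriever is practical.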

Abstract

We present the AILS-NTUA system for SemEval-2026 Task 8 (MTRAGEval), addressing all three subtasks of multi-turn retrieval-augmented generation: passage retrieval (A), reference-grounded response generation (B), and end-to-end RAG (C). Our unified architecture is built on two principles: (i) a query-diversity-over-retriever-diversity strategy, where five complementary LLM-based query reformulations are issued to a single corpus-aligned sparse retriever and fused via variance-aware nested Reciprocal Rank Fusion; and (ii) a multistage generation pipeline that decomposes grounded generation into evidence span extraction, dual-candidate drafting, and calibrated multi-judge selection. Our system ranks 1st in Task A (nDCG@5: 0.5776, +20.5% over the strongest baseline) and 2nd in Task B (HM: 0.7698). Empirical analysis shows that query diversity over a well-aligned retriever outperforms heterogeneous retriever ensembling, and that answerability calibration, rather than retrieval coverage, is the primary bottleneck in end-to-end performance.
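For readers unfamiliar with the retrieval metric behind the Task A result, nDCG@5 rewards placing relevant passages near the top of the first five results, normalized by the best achievable ordering. The sketch below uses the common binary-relevance, log2-discount instantiation; the shared task's exact gain and discount variant may differ.

```python
import math

def dcg_at_k(gains, k):
    # Discounted cumulative gain: rank 1 is discounted by log2(2) = 1.
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(retrieved, relevant, k=5):
    """Binary-relevance nDCG@k for a single query."""
    gains = [1.0 if d in relevant else 0.0 for d in retrieved]
    # Ideal ranking puts every relevant doc first, up to the cutoff.
    ideal = [1.0] * min(len(relevant), k)
    idcg = dcg_at_k(ideal, k)
    return dcg_at_k(gains, k) / idcg if idcg > 0 else 0.0

# Hypothetical example: relevant passage at rank 1, another at rank 3.
score = ndcg_at_k(["p1", "p7", "p3", "p9", "p2"], {"p1", "p3"})
```

A reported score of 0.5776, averaged over queries, thus reflects consistently placing some (but not all) gold passages high in the top-5.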