H-RAG at SemEval-2026 Task 8: Hierarchical Parent-Child Retrieval for Multi-Turn RAG Conversations
arXiv cs.CL / 5/4/2026
Key Points
- The paper introduces H-RAG, a hierarchical parent-child retrieval pipeline submitted to SemEval-2026 Task 8 (MTRAGEval), covering both retrieval quality (Task A) and multi-turn RAG generation with evidence grounding (Task C).
- H-RAG splits documents into overlapping sentence-based “child” chunks for fine-grained retrieval, while retaining full documents as “parent” units to reconstruct coherent context during generation.
- The retrieval stage uses a hybrid dense-sparse search with tunable weighting plus embedding-similarity rescoring over child chunks, then aggregates retrieved evidence at the parent level for the language model.
- Reported results are an nDCG@5 of 0.4271 on Task A and a harmonic-mean score of 0.3241 on Task C, suggesting that retrieval configuration and parent-level evidence aggregation are critical for multi-turn RAG performance.
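The parent-child retrieval idea can be sketched in a few lines: index small overlapping "child" chunks for precise matching, score them with a weighted mix of a dense and a sparse signal, then hand the language model evidence aggregated at the "parent" (full-document) level. The sketch below is illustrative only; the function names, toy scoring functions, chunk sizes, and the weighting `alpha` are assumptions, not the paper's actual implementation (a real system would use a bi-encoder for dense scores and BM25 for sparse scores).

```python
# Hypothetical sketch of a hierarchical parent-child hybrid retrieval
# pipeline in the spirit of H-RAG. All names and weights are illustrative.
import math
import re
from collections import Counter, defaultdict

def split_into_children(doc_id, text, sents_per_chunk=3, overlap=1):
    """Split a parent document into overlapping sentence-based child chunks,
    each tagged with its parent id so evidence can be re-aggregated later."""
    sents = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    step = max(1, sents_per_chunk - overlap)
    chunks = []
    for start in range(0, len(sents), step):
        window = sents[start:start + sents_per_chunk]
        if window:
            chunks.append({"parent": doc_id, "text": " ".join(window)})
        if start + sents_per_chunk >= len(sents):
            break
    return chunks

def sparse_score(query, chunk_text):
    """Toy lexical (sparse) signal: term-frequency overlap between
    query and chunk. A real system would use BM25 here."""
    q = Counter(query.lower().split())
    c = Counter(chunk_text.lower().split())
    return sum(min(q[t], c[t]) for t in q)

def dense_score(query, chunk_text):
    """Stand-in for embedding cosine similarity; here just a set-overlap
    cosine so the sketch stays self-contained."""
    q, c = set(query.lower().split()), set(chunk_text.lower().split())
    return len(q & c) / math.sqrt(len(q) * len(c)) if q and c else 0.0

def hybrid_retrieve(query, chunks, alpha=0.6, top_k=5):
    """Score children with a tunable dense/sparse mix, then aggregate the
    top hits at the parent level (ranking parents by their best child)."""
    scored = [(alpha * dense_score(query, ch["text"])
               + (1 - alpha) * sparse_score(query, ch["text"]), ch)
              for ch in chunks]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    parent_scores = defaultdict(float)
    for score, ch in scored[:top_k]:
        parent_scores[ch["parent"]] = max(parent_scores[ch["parent"]], score)
    return sorted(parent_scores, key=parent_scores.get, reverse=True)
```

Usage: build the child index once with `split_into_children` over the corpus, then call `hybrid_retrieve` per turn; the returned parent ids point at the full documents whose text is passed to the generator, which is what lets the model see coherent context even though matching happened at sentence granularity.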