MDER-DR: Multi-Hop Question Answering with Entity-Centric Summaries

arXiv cs.AI / 3/13/2026

Key Points

  • The paper presents MDER-DR, a KG-based QA framework designed to improve multi-hop question answering by preserving contextual nuance with context-derived triple descriptions and entity-level summaries, removing the need for explicit graph-edge traversal during retrieval.
  • It combines Map-Disambiguate-Enrich-Reduce (MDER) for indexing, which generates enriched triple descriptions, with Decompose-Resolve (DR) as a retrieval mechanism that decomposes queries into resolvable triples and grounds them in the KG via iterative reasoning.
  • The proposed pipeline is domain-agnostic and LLM-driven, showing substantial improvements over standard RAG baselines (up to 66%) and demonstrating cross-lingual robustness on both standard and domain-specific benchmarks.
  • The authors provide open-source code on GitHub to facilitate replication and adaptation to other KG-based QA scenarios.
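The paper itself defines only the stage names (Map, Disambiguate, Enrich, Reduce); the sketch below is a minimal illustration of how such an indexing pass could be wired together. All function names, the `llm` interface, and the data shapes are assumptions, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Triple:
    subj: str
    rel: str
    obj: str
    description: str = ""  # filled in by the Enrich step

def mder_index(passages, llm):
    """Illustrative sketch of Map-Disambiguate-Enrich-Reduce indexing."""
    # Map: extract candidate triples from each source passage.
    triples = [t for p in passages for t in llm.extract_triples(p)]
    # Disambiguate: map surface forms that denote the same entity to one name.
    canon = llm.canonicalize({t.subj for t in triples} | {t.obj for t in triples})
    for t in triples:
        t.subj, t.obj = canon[t.subj], canon[t.obj]
        # Enrich: attach a context-derived description to the triple.
        t.description = llm.describe(t)
    # Reduce: fold each entity's triple descriptions into an entity-level
    # summary, so retrieval can later match summaries instead of walking edges.
    per_entity = {}
    for t in triples:
        for e in (t.subj, t.obj):
            per_entity.setdefault(e, []).append(t.description)
    summaries = {e: llm.summarize(descs) for e, descs in per_entity.items()}
    return triples, summaries
```

In this reading, the entity-level summaries produced by Reduce are the index artifact that lets the retrieval phase avoid explicit edge traversal, which matches the claim in the first key point.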

Abstract

Retrieval-Augmented Generation (RAG) over Knowledge Graphs (KGs) suffers from the fact that indexing approaches may lose important contextual nuance when text is reduced to triples, thereby degrading performance in downstream Question-Answering (QA) tasks, particularly for multi-hop QA, which requires composing answers from multiple entities, facts, or relations. We propose a domain-agnostic, KG-based QA framework that covers both the indexing and retrieval/inference phases. A new indexing approach called Map-Disambiguate-Enrich-Reduce (MDER) generates context-derived triple descriptions and subsequently integrates them with entity-level summaries, thus avoiding the need for explicit traversal of edges in the graph during the QA retrieval phase. Complementing this, we introduce Decompose-Resolve (DR), a retrieval mechanism that decomposes user queries into resolvable triples and grounds them in the KG via iterative reasoning. Together, MDER and DR form an LLM-driven QA pipeline that is robust to sparse, incomplete, and complex relational data. Experiments show that on standard and domain-specific benchmarks, MDER-DR achieves substantial improvements over standard RAG baselines (up to 66%), while maintaining cross-lingual robustness. Our code is available at https://github.com/DataSciencePolimi/MDER-DR_RAG.
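The Decompose-Resolve retrieval described in the abstract can be pictured as an iterative loop: break the question into triple-shaped sub-queries with placeholders for unknowns, then resolve them hop by hop, substituting entities found in earlier iterations. The sketch below is a hypothetical rendering under that reading; the triple encoding, the `llm` methods, and the stopping logic are all illustrative assumptions rather than the paper's actual mechanism.

```python
def decompose_resolve(question, entity_summaries, llm, max_hops=4):
    """Illustrative sketch of the Decompose-Resolve retrieval loop."""
    # Decompose: split the question into triple-shaped sub-queries, using
    # "?"-prefixed placeholders for unknowns, e.g. ("?x", "directed", "Film F").
    sub_queries = llm.decompose(question)
    bindings = {}  # placeholder -> resolved entity
    for _ in range(max_hops):
        progress = False
        for sq in sub_queries:
            # The variable this sub-query is meant to bind.
            key = sq[0] if sq[0].startswith("?") else sq[2]
            if key in bindings:
                continue
            # Substitute variables resolved in earlier hops.
            grounded = tuple(bindings.get(x, x) for x in sq)
            # Resolve: ground the sub-query against entity-level summaries
            # rather than traversing graph edges.
            ans = llm.resolve(grounded, entity_summaries)
            if ans is not None:
                bindings[key] = ans
                progress = True
        if not progress:
            break  # nothing more is resolvable; fall through to answering
    # Compose the final answer from the question and the resolved bindings.
    return llm.answer(question, bindings)
```

The fixed `max_hops` bound is one simple way to keep the iterative reasoning from looping when a sub-query cannot be grounded; how the actual system terminates is not specified in this summary.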