M-RAG: Making RAG Faster, Stronger, and More Efficient

arXiv cs.AI / 3/31/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper proposes M-RAG, a chunk-free retrieval strategy for Retrieval-Augmented Generation (RAG) that addresses common issues caused by text chunking, such as fragmentation, retrieval noise, and inefficiency.
  • Instead of retrieving coarse text chunks, M-RAG extracts structured key-value (k-v) meta-markers with a lightweight, intent-aligned retrieval key for matching and a richer value for generation.
  • The approach aims to maintain expressive retrieval quality while enabling efficient and stable query-key similarity matching, decoupling retrieval representation from generation.
  • Experiments on LongBench subtasks show M-RAG improves performance over chunk-based RAG baselines across different token budgets, with particular gains in low-resource settings.
  • Additional analysis indicates M-RAG retrieves more answer-friendly evidence with higher efficiency, positioning it as a scalable, robust alternative to chunk-based methods.

Abstract

Retrieval-Augmented Generation (RAG) has become a widely adopted paradigm for enhancing the reliability of large language models (LLMs). However, RAG systems are sensitive to retrieval strategies that rely on text chunking to construct retrieval units, which often introduces information fragmentation, retrieval noise, and reduced efficiency. Recent work has even questioned the necessity of RAG, arguing that long-context LLMs may eliminate multi-stage retrieval pipelines by directly processing full documents. Nevertheless, expanded context capacity alone does not resolve the challenges of relevance filtering, evidence prioritization, and isolating answer-bearing information. To this end, we propose M-RAG, a novel chunk-free retrieval strategy. Instead of retrieving coarse-grained textual chunks, M-RAG extracts structured key-value (k-v) meta-markers, pairing a lightweight, intent-aligned key for retrieval with a context-rich value for generation. Under this setting, M-RAG enables efficient and stable query-key similarity matching without sacrificing expressive power. Experimental results on the LongBench subtasks demonstrate that M-RAG outperforms chunk-based RAG baselines across varying token budgets, particularly under low-resource settings. Extensive analysis further reveals that M-RAG retrieves more answer-friendly evidence with high efficiency, validating the effectiveness of decoupling retrieval representation from generation and highlighting the proposed strategy as a scalable and robust alternative to existing chunk-based methods.
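The core idea of decoupling a small retrieval key from a rich generation value can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the `MetaMarker` class, the example markers, and the bag-of-words `embed` function (a stand-in for whatever dense encoder the authors actually use) are all assumptions made for clarity. Only the keys are embedded and matched against the query; the matched values are what would be passed to the LLM.

```python
from dataclasses import dataclass
from collections import Counter
import math

@dataclass
class MetaMarker:
    key: str    # lightweight, intent-aligned retrieval key (matched against queries)
    value: str  # context-rich value handed to the generator

def embed(text: str) -> Counter:
    # Toy bag-of-words embedding; a real system would use a dense encoder.
    return Counter(t.strip(".,?!").lower() for t in text.split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, markers: list, top_k: int = 2) -> list:
    # Score the query against the compact keys only, then return the rich values.
    q = embed(query)
    ranked = sorted(markers, key=lambda m: cosine(q, embed(m.key)), reverse=True)
    return [m.value for m in ranked[:top_k]]

# Hypothetical markers extracted from a document corpus.
markers = [
    MetaMarker("founding year of ACME", "ACME Corp was founded in 1949 by Jane Doe."),
    MetaMarker("ACME revenue 2023", "In fiscal 2023, ACME reported revenue of $2.1B."),
    MetaMarker("capital of France", "Paris is the capital and largest city of France."),
]

print(retrieve("What is the founding year of ACME?", markers, top_k=1))
# → ['ACME Corp was founded in 1949 by Jane Doe.']
```

Because similarity is computed only over the short keys, the index stays small and matching stays cheap, while the values retain enough surrounding context for the generator — which is the decoupling the paper credits for its efficiency gains.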