QChunker: Learning Question-Aware Text Chunking for Domain RAG via Multi-Agent Debate

arXiv cs.CL / 3/13/2026

Key Points

  • The paper proposes QChunker, which reframes the RAG paradigm from retrieval-augmentation to understanding-retrieval-augmentation, modeling text chunking as segmentation plus knowledge completion to preserve semantic integrity.
  • It introduces a four-agent debate framework consisting of a question outline generator, text segmenter, integrity reviewer, and knowledge completer, on the premise that questions catalyze deeper insight (see the sketch after this list).
  • The pipeline yields a 45K-entry dataset used to transfer the chunking capability to small language models, and the paper introduces ChunkScore, a new metric for direct chunk-quality assessment.
  • By using document outlines and multi-path sampling to generate multiple candidate chunks and selecting the best with ChunkScore, QChunker achieves more coherent and information-rich chunks across multiple domains.
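Below is a minimal sketch of how the four-agent debate loop described above might be orchestrated. The agent roles follow the paper, but everything else, including the `LLM` callable interface, the prompts, the chunk delimiter, and the review-and-revise loop, is an assumption rather than the authors' implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical interface: any chat-completion client wrapped as prompt -> text.
LLM = Callable[[str], str]

@dataclass
class Chunk:
    text: str
    completed: bool = False  # set once the knowledge completer signs off

def qchunker_pipeline(document: str, llm: LLM, max_rounds: int = 2) -> List[Chunk]:
    """Sketch of the four-agent debate loop; prompts and control flow are assumed."""
    # Agent 1, question outline generator: questions the document should answer.
    outline = llm(f"List the key questions this document answers:\n{document}")

    # Agent 2, text segmenter: split the document guided by the question outline.
    raw = llm(
        "Split the document into chunks, one per question, separated by '---'.\n"
        f"Questions:\n{outline}\nDocument:\n{document}"
    )
    segments = [s.strip() for s in raw.split("---") if s.strip()]

    chunks: List[Chunk] = []
    for seg in segments:
        for _ in range(max_rounds):
            # Agent 3, integrity reviewer: the debate step; flags missing context.
            verdict = llm(f"Is this chunk self-contained? Answer OK or list gaps:\n{seg}")
            if verdict.strip().upper().startswith("OK"):
                break
            # Agent 4, knowledge completer: rewrite to fill the gaps raised.
            seg = llm(f"Rewrite the chunk to fix these gaps:\n{verdict}\nChunk:\n{seg}")
        chunks.append(Chunk(text=seg, completed=True))
    return chunks
```

In this reading, the "debate" is the reviewer repeatedly challenging a chunk's self-containedness until the completer satisfies it or the round budget runs out.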

Abstract

The upper bound on the effectiveness of retrieval-augmented generation (RAG) is fundamentally constrained by the semantic integrity and information granularity of the text chunks in its knowledge base. To address these challenges, this paper proposes QChunker, which restructures the RAG paradigm from retrieval-augmentation to understanding-retrieval-augmentation. First, QChunker models text chunking as a composite task of text segmentation and knowledge completion to ensure the logical coherence and integrity of text chunks. Drawing inspiration from Hal Gregersen's "Questions Are the Answer" theory, we design a multi-agent debate framework comprising four specialized agents: a question outline generator, a text segmenter, an integrity reviewer, and a knowledge completer. The framework operates on the principle that questions serve as catalysts for deeper insight. Through this pipeline, we construct a high-quality dataset of 45K entries and transfer the chunking capability to small language models. Additionally, to address the long evaluation chains and low efficiency of existing chunking evaluation methods, which rely heavily on downstream QA tasks, we introduce ChunkScore, a novel metric that evaluates chunks directly. Both theoretical and experimental validation demonstrate that ChunkScore discriminates chunk quality directly and efficiently. Furthermore, during the text segmentation phase, we use document outlines to drive multi-path sampling, generating multiple candidate chunkings and selecting the optimal one with ChunkScore. Extensive experiments across four heterogeneous domains show that QChunker effectively resolves the aforementioned issues by providing RAG with more logically coherent and information-rich text chunks.
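The abstract does not define ChunkScore, so the sketch below treats it as an opaque scoring callable. The outline-driven multi-path sampling and best-of-n selection mirror the abstract's description, while `segment_fn`, the temperature range, and the number of paths are illustrative assumptions.

```python
import random
from typing import Callable, List

# Stand-in for the paper's ChunkScore: scores a candidate chunking, higher is
# better. The abstract does not give its definition, so it stays opaque here.
ChunkScore = Callable[[List[str]], float]

# Assumed segmenter signature: (document, outline, temperature) -> list of chunks.
Segmenter = Callable[[str, List[str], float], List[str]]

def multipath_segment(
    document: str,
    outline: List[str],
    segment_fn: Segmenter,
    chunk_score: ChunkScore,
    n_paths: int = 5,
) -> List[str]:
    """Outline-driven multi-path sampling with ChunkScore-based selection."""
    candidates: List[List[str]] = []
    for _ in range(n_paths):
        # Sample each path at a different temperature to diversify candidates
        # (the sampling strategy is an assumption, not from the paper).
        temperature = random.uniform(0.5, 1.0)
        candidates.append(segment_fn(document, outline, temperature))
    # Keep the candidate chunking that ChunkScore rates highest.
    return max(candidates, key=chunk_score)
```

An LLM segmenter sampled at nonzero temperature would naturally supply the diverse candidate chunkings that `segment_fn` is assumed to produce.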