From PDF to RAG-Ready: Evaluating Document Conversion Frameworks for Domain-Specific Question Answering

arXiv cs.AI / 4/8/2026


Key Points

  • The paper finds that Retrieval-Augmented Generation (RAG) performance is driven more by document preprocessing choices than by the specific PDF-to-Markdown conversion framework used.
  • It systematically benchmarks four open-source PDF conversion approaches (Docling, MinerU, Marker, DeepSeek OCR) across 19 pipeline configurations on a 50-question benchmark from 36 Portuguese administrative documents, using LLM-as-judge scoring averaged over 10 runs.
  • The best automated accuracy is achieved by Docling with hierarchical splitting and image descriptions (94.1%), outperforming the other conversion frameworks.
  • Metadata enrichment and hierarchy-aware chunking improve QA accuracy more than conversion tool selection alone, and font-based hierarchy rebuilding consistently beats LLM-based hierarchy reconstruction.
  • An exploratory GraphRAG setup scores only 82%, underperforming basic RAG and implying that naive knowledge-graph construction without strong ontological guidance adds complexity without benefit.
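The font-based hierarchy rebuilding that the paper finds most reliable can be pictured roughly as follows. This is a minimal sketch, not the paper's implementation: the span data model (dicts with `text` and `size`) and the body-size threshold are assumptions for illustration.

```python
def rebuild_hierarchy(spans, body_size=10.0):
    """Map font sizes to Markdown heading levels.

    Distinct sizes above body_size are ranked: the largest becomes '#',
    the next '##', and so on; body-sized text is left unchanged.
    """
    heading_sizes = sorted({s["size"] for s in spans if s["size"] > body_size},
                           reverse=True)
    level = {size: i + 1 for i, size in enumerate(heading_sizes)}
    lines = []
    for span in spans:
        lvl = level.get(span["size"], 0)
        prefix = "#" * lvl + " " if lvl else ""
        lines.append(prefix + span["text"])
    return "\n".join(lines)

spans = [{"text": "Decreto n.º 1", "size": 18.0},
         {"text": "Artigo 1", "size": 14.0},
         {"text": "Body paragraph.", "size": 10.0}]
print(rebuild_hierarchy(spans))
```

The appeal of this approach over LLM-based reconstruction is that it is deterministic and cheap: heading levels follow directly from layout metadata already present in the PDF, with no model calls.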

Abstract

Retrieval-Augmented Generation (RAG) systems depend critically on the quality of document preprocessing, yet no prior study has evaluated PDF processing frameworks by their impact on downstream question-answering accuracy. We address this gap through a systematic comparison of four open-source PDF-to-Markdown conversion frameworks (Docling, MinerU, Marker, and DeepSeek OCR) across 19 pipeline configurations for extracting text and other content from PDFs, varying the conversion tool, cleaning transformations, splitting strategy, and metadata enrichment. Evaluation was performed using a manually curated 50-question benchmark over a corpus of 36 Portuguese administrative documents (1,706 pages, ~492K words), with LLM-as-judge scoring averaged over 10 runs. Two baselines bounded the results: naïve PDFLoader (86.9%) and manually curated Markdown (97.1%). Docling with hierarchical splitting and image descriptions achieved the highest automated accuracy (94.1%). Metadata enrichment and hierarchy-aware chunking contributed more to accuracy than the conversion framework choice alone. Font-based hierarchy rebuilding consistently outperformed LLM-based approaches. An exploratory GraphRAG implementation scored only 82%, underperforming basic RAG, suggesting that naïve knowledge graph construction without ontological guidance does not yet justify its added complexity. These findings demonstrate that data preparation quality is the dominant factor in RAG system performance.
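The evaluation protocol described above (LLM-as-judge scoring averaged over 10 runs) can be sketched as a simple loop. This is a hypothetical outline, not the paper's code: `judge` is a stand-in callable for whatever judge prompt and model the authors used, returning 1.0 for a correct answer and 0.0 otherwise.

```python
from statistics import mean

def evaluate(answers, references, judge, runs=10):
    """Score each (answer, reference) pair with an LLM judge and
    return accuracy averaged over `runs` independent judging passes,
    smoothing out run-to-run judge variance."""
    per_run = []
    for _ in range(runs):
        scores = [judge(a, r) for a, r in zip(answers, references)]
        per_run.append(mean(scores))
    return mean(per_run)

# Toy usage with an exact-match stand-in judge:
exact = lambda a, r: 1.0 if a == r else 0.0
print(evaluate(["Lisboa", "1706"], ["Lisboa", "492K"], exact, runs=3))
```

Averaging over multiple judging passes matters because an LLM judge is itself stochastic; a single pass can shift a pipeline's score by more than the gap between configurations.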