Towards Real-World Document Parsing via Realistic Scene Synthesis and Document-Aware Training

arXiv cs.CV / 3/26/2026


Key Points

  • The paper tackles failures of traditional and end-to-end document parsing systems under casually captured or non-standard document conditions by improving dataset quality and structure-aware training.
  • It introduces a data-training co-design approach: Realistic Scene Synthesis for generating large-scale, structurally diverse full-page end-to-end supervision and a Document-Aware Training Recipe using progressive learning and structure-token optimization.
  • The authors also create Wild-OmniDocBench, a benchmark built from real-world captured documents to evaluate robustness across diverse capture scenarios.
  • Experiments show that integrating the approach into a 1B-parameter multimodal LLM improves both accuracy and robustness on scanned/digital and real-world captured documents.
  • The work states that models, data synthesis pipelines, and benchmarks will be publicly released to support future research.
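The Realistic Scene Synthesis idea (composing layout templates with document elements to yield full-page supervision) can be sketched as follows. This is a minimal illustration, not the released pipeline: the element pool, region names, and markup targets here are invented for the example.

```python
import random

# Illustrative pool of document elements, keyed by region type.
# The actual pipeline's elements and markup format are not specified
# in this summary; these values are placeholders.
ELEMENT_POOL = {
    "title":     ["# Annual Report", "# Methods"],
    "paragraph": ["Lorem ipsum dolor sit amet.", "Results are summarized below."],
    "table":     ["| metric | value |\n|---|---|\n| F1 | 0.91 |"],
    "formula":   ["$E = mc^2$"],
}

def synthesize_page(template, rng):
    """Fill each region of a layout template (listed in reading order)
    with a sampled element, returning the full-page markup that would
    serve as the end-to-end parsing target."""
    blocks = []
    for region in template:
        blocks.append(rng.choice(ELEMENT_POOL[region]))
    return "\n\n".join(blocks)

rng = random.Random(0)
template = ["title", "paragraph", "table", "paragraph"]
page_markup = synthesize_page(template, rng)
```

Varying the templates and element pools is what would give the "structurally diverse" coverage the paper emphasizes; a rendering step (not shown) would turn the same composition into the paired page image.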

Abstract

Document parsing has recently advanced with multimodal large language models (MLLMs) that directly map document images to structured outputs. Traditional cascaded pipelines depend on precise layout analysis and often fail under casually captured or non-standard conditions. Although end-to-end approaches mitigate this dependency, they still exhibit repetitive, hallucinated, and structurally inconsistent predictions - primarily due to the scarcity of large-scale, high-quality full-page (document-level) end-to-end parsing data and the lack of structure-aware training strategies. To address these challenges, we propose a data-training co-design framework for robust end-to-end document parsing. A Realistic Scene Synthesis strategy constructs large-scale, structurally diverse full-page end-to-end supervision by composing layout templates with rich document elements, while a Document-Aware Training Recipe introduces progressive learning and structure-token optimization to enhance structural fidelity and decoding stability. We further build Wild-OmniDocBench, a benchmark derived from real-world captured documents for robustness evaluation. Integrated into a 1B-parameter MLLM, our method achieves superior accuracy and robustness across both scanned/digital and real-world captured scenarios. All models, data synthesis pipelines, and benchmarks will be publicly released to advance future research in document understanding.
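The abstract names structure-token optimization but gives no formula. One plausible reading, sketched here purely as an assumption, is a weighted negative log-likelihood that upweights tokens encoding document structure (table, heading, and list markup) relative to plain-text tokens; the token set and weight below are illustrative.

```python
import math

# Hypothetical set of structure-bearing tokens; the paper's actual
# tokenization and structure-token definition are not given here.
STRUCTURE_TOKENS = {"<table>", "</table>", "<tr>", "<td>", "#", "|"}

def weighted_nll(tokens, probs, structure_weight=2.0):
    """Average negative log-likelihood in which structure tokens count
    `structure_weight` times as much as ordinary text tokens."""
    total, weight_sum = 0.0, 0.0
    for tok, p in zip(tokens, probs):
        w = structure_weight if tok in STRUCTURE_TOKENS else 1.0
        total += -w * math.log(p)
        weight_sum += w
    return total / weight_sum

# Upweighting penalizes low confidence on structural markup more heavily.
loss = weighted_nll(["<table>", "cell", "</table>"], [0.5, 0.9, 0.5])
baseline = weighted_nll(["<table>", "cell", "</table>"], [0.5, 0.9, 0.5],
                        structure_weight=1.0)
```

Under this reading, the intended effect is to make the decoder pay disproportionate attention to getting the document's markup skeleton right, which is consistent with the abstract's goal of improving structural fidelity and decoding stability.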