Boosting Document Parsing Efficiency and Performance with Coarse-to-Fine Visual Processing

arXiv cs.CV / 3/26/2026


Key Points

  • The paper argues that high-resolution document parsing is inefficient because vision token counts (and compute) grow quadratically with input resolution, even though document images contain many redundant visual regions, such as backgrounds.
  • It proposes PaddleOCR-VL, a coarse-to-fine visual processing architecture that suppresses redundant regions and concentrates compute on semantically valid parts of the page.
  • A lightweight Valid Region Focus Module (VRFM) is introduced to predict and localize valid vision tokens using localization and contextual relationship signals.
  • The system is paired with a compact 0.9B vision-language model (PaddleOCR-VL-0.9B) for detailed recognition, guided by VRFM outputs to avoid processing the entire image directly.
  • Experiments report state-of-the-art page-level parsing and element-level recognition, with faster inference and substantially fewer vision tokens and parameters than prior approaches, and the code/models are released on GitHub.
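The coarse-to-fine idea in the points above can be sketched in miniature: a cheap first stage scores patches and discards background, so the expensive recognition stage sees only the valid tokens. This is an illustrative toy, not the paper's implementation; the function names, threshold, and hand-set scores are all assumptions (the real VRFM learns its localization and contextual signals).

```python
import numpy as np

def coarse_valid_region_mask(patch_scores, threshold=0.5):
    """Coarse stage (stand-in for the VRFM): keep patches whose
    'validity' score exceeds a threshold. The real module predicts
    these scores from learned localization/contextual features."""
    return patch_scores > threshold

def fine_recognition(tokens, mask):
    """Fine stage (stand-in for PaddleOCR-VL-0.9B): process only
    the tokens the coarse mask selected."""
    return tokens[mask]

# Toy 16-token page: high scores = text regions, low = background.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 8))  # 16 vision tokens, embedding dim 8
patch_scores = np.array([0.9] * 4 + [0.1] * 8 + [0.8] * 4)

mask = coarse_valid_region_mask(patch_scores)
kept = fine_recognition(tokens, mask)
print(f"tokens before: {tokens.shape[0]}, after: {kept.shape[0]}")
# → tokens before: 16, after: 8
```

Half of the tokens (the background patches) never reach the expensive stage, which is the source of the efficiency gains the paper reports.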

Abstract

Document parsing is a fine-grained task where image resolution significantly impacts performance. While advanced research leveraging vision-language models benefits from high-resolution input to boost model performance, this often leads to a quadratic increase in the number of vision tokens and significantly raises computational costs. We attribute this inefficiency to substantial redundancy among visual regions in document images, such as backgrounds. To tackle this, we propose PaddleOCR-VL, a novel coarse-to-fine architecture that focuses on semantically relevant regions while suppressing redundant ones, thereby improving both efficiency and performance. Specifically, we introduce a lightweight Valid Region Focus Module (VRFM), which leverages localization and contextual relationship prediction capabilities to identify valid vision tokens. Subsequently, we design and train a compact yet powerful 0.9B vision-language model (PaddleOCR-VL-0.9B) to perform detailed recognition, guided by VRFM outputs to avoid direct processing of the entire large image. Extensive experiments demonstrate that PaddleOCR-VL achieves state-of-the-art performance in both page-level parsing and element-level recognition. It significantly outperforms existing solutions, exhibits strong competitiveness against top-tier VLMs, and delivers fast inference while utilizing substantially fewer vision tokens and parameters, highlighting the effectiveness of targeted coarse-to-fine parsing for accurate and efficient document understanding. The source code and models are publicly available at https://github.com/PaddlePaddle/PaddleOCR.
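The quadratic cost the abstract attributes to high-resolution input is easy to see numerically: a ViT-style encoder emits one token per image patch, so doubling each side of the image quadruples the token count. The patch size of 14 below is an assumption (a common choice for ViT/SigLIP-style encoders), not a value stated by the paper.

```python
def num_vision_tokens(height, width, patch=14):
    """Rough token count for a patch-based vision encoder:
    one token per (patch x patch) tile of the input image."""
    return (height // patch) * (width // patch)

# Doubling the resolution quadruples the number of vision tokens.
for side in (448, 896, 1792):
    print(f"{side}x{side} -> {num_vision_tokens(side, side)} tokens")
# → 448x448 -> 1024 tokens
# → 896x896 -> 4096 tokens
# → 1792x1792 -> 16384 tokens
```

Since transformer attention is itself quadratic in sequence length, this token growth compounds into the "significantly raised computational costs" the abstract describes, which is what makes discarding background tokens early so effective.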