MinerU-Diffusion: Rethinking Document OCR as Inverse Rendering via Diffusion Decoding

arXiv cs.CV / March 25, 2026


Key Points

  • The paper argues that document OCR does not fundamentally require left-to-right autoregressive generation and can instead be treated as inverse rendering under visual conditioning.
  • It introduces MinerU-Diffusion, a diffusion-based document OCR framework that uses parallel diffusion denoising with a block-wise decoder to replace sequential decoding.
  • The method adds an uncertainty-driven curriculum learning strategy to support stable training and efficient inference on long sequences.
  • Experiments report improved robustness and up to 3.2x faster decoding versus autoregressive baselines, with strong results on the Semantic Shuffle benchmark.
  • The benchmark findings suggest the approach relies less on linguistic priors and more on visual OCR capability.
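The paper does not publish implementation details in this summary, but the core idea of replacing sequential decoding with block-wise parallel denoising can be illustrated with a toy sketch. Everything below is an assumption for illustration: `toy_predict` is a hypothetical stand-in for the visual-conditioned diffusion decoder, and the block size and reveals-per-step schedule are invented. The point is only the control flow: all masked positions in a block are predicted in parallel, and the most confident predictions are committed each denoising step, so the number of steps can be far smaller than the sequence length.

```python
import random

MASK = "<mask>"

def toy_predict(tokens, position):
    # Hypothetical stand-in for a diffusion decoder's per-position
    # prediction: returns (token, confidence). A real model would
    # condition on the page image and the partially revealed sequence;
    # here we just produce a deterministic pseudo-random confidence.
    rng = random.Random(position)
    return f"tok{position}", rng.random()

def blockwise_diffusion_decode(seq_len, block_size=4, reveals_per_step=2):
    """Decode a sequence by iteratively unmasking the most confident
    positions within each block, instead of one token per step
    left-to-right as in autoregressive decoding."""
    tokens = [MASK] * seq_len
    steps = 0
    for start in range(0, seq_len, block_size):
        block = range(start, min(start + block_size, seq_len))
        while any(tokens[i] == MASK for i in block):
            # Predict every still-masked position in the block in parallel.
            preds = {i: toy_predict(tokens, i)
                     for i in block if tokens[i] == MASK}
            # Commit the k most confident predictions this denoising step.
            ranked = sorted(preds.items(), key=lambda kv: -kv[1][1])
            for i, (tok, _conf) in ranked[:reveals_per_step]:
                tokens[i] = tok
            steps += 1
    return tokens, steps

tokens, steps = blockwise_diffusion_decode(seq_len=8)
print(steps)  # 4 denoising steps, versus 8 steps for token-by-token AR decoding
```

With 8 positions, 2 blocks of 4, and 2 reveals per step, decoding finishes in 4 parallel steps rather than 8 sequential ones, which is the source of the speedups the paper reports (the actual 3.2x figure depends on the real model and schedule, not this toy).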

Abstract

Optical character recognition (OCR) has evolved from line-level transcription to structured document parsing, requiring models to recover long-form sequences containing layout, tables, and formulas. Despite recent advances in vision-language models, most existing systems rely on autoregressive decoding, which introduces sequential latency and amplifies error propagation in long documents. In this work, we revisit document OCR from an inverse rendering perspective, arguing that left-to-right causal generation is an artifact of serialization rather than an intrinsic property of the task. Motivated by this insight, we propose MinerU-Diffusion, a unified diffusion-based framework that replaces autoregressive sequential decoding with parallel diffusion denoising under visual conditioning. MinerU-Diffusion employs a block-wise diffusion decoder and an uncertainty-driven curriculum learning strategy to enable stable training and efficient long-sequence inference. Extensive experiments demonstrate that MinerU-Diffusion consistently improves robustness while achieving up to 3.2x faster decoding compared to autoregressive baselines. Evaluations on the proposed Semantic Shuffle benchmark further confirm its reduced dependence on linguistic priors and stronger visual OCR capability.
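The Semantic Shuffle benchmark is described only at a high level here, but its stated purpose, measuring OCR accuracy when linguistic priors are unavailable, suggests a construction along the following lines. This is a hypothetical sketch of that idea, not the paper's actual pipeline: shuffle the words of a ground-truth transcription before rendering the test page, so a model can no longer guess tokens from language statistics and must read the pixels.

```python
import random

def semantic_shuffle(text, seed=0):
    """Shuffle the words of a ground-truth transcription so the rendered
    test page carries no usable linguistic prior; a model that still
    transcribes it correctly must rely on visual evidence alone."""
    words = text.split()
    rng = random.Random(seed)  # fixed seed keeps the benchmark reproducible
    rng.shuffle(words)
    return " ".join(words)

original = "the quick brown fox jumps over the lazy dog"
shuffled = semantic_shuffle(original)
print(shuffled)  # same words, scrambled order
```

A language-model-heavy decoder tends to "autocorrect" such scrambled text back toward fluent English, which shows up as transcription errors; a visually grounded decoder does not, which is the behavior the benchmark is designed to separate.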