AI Navigate

Efficient Document Parsing via Parallel Token Prediction

arXiv cs.CL · March 17, 2026

💬 Opinion · Models & Research

Key Points

  • The paper introduces Parallel-Token Prediction (PTP) to enable vision-language models to generate multiple future tokens in parallel, addressing the decoding bottleneck in document parsing.
  • It does so by inserting learnable tokens into the input sequence and designing corresponding training objectives that teach the model to decode in parallel.
  • A comprehensive data generation pipeline is developed to efficiently produce large-scale, high-quality document parsing data for VLMs.
  • Experiments on OmniDocBench and olmOCR-bench show decoding speed improvements of 1.6x-2.2x, reduced hallucinations, and strong generalization.
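The core idea above can be sketched as follows. This is a minimal toy illustration, not the paper's actual implementation: the `[PAR]` placeholder name, the step layout, and the `toy_model` are all assumptions made for clarity. The idea is that each decoding step feeds the model the generated prefix plus k-1 learnable placeholder tokens, whose output positions are trained to emit additional future tokens, so one forward pass yields k tokens instead of one.

```python
PAR = "[PAR]"  # hypothetical name for the learnable placeholder token

def ptp_step_inputs(prefix, k):
    """Input for one parallel decoding step: the generated prefix
    followed by k-1 placeholders. In a trained model, each placeholder's
    output position predicts one additional future token."""
    return prefix + [PAR] * (k - 1)

def ptp_decode(model_fn, prompt, k, max_new):
    """Greedy parallel decoding loop: every model call yields k tokens,
    so the number of sequential forward passes shrinks by roughly k."""
    out = list(prompt)
    steps = 0
    while len(out) - len(prompt) < max_new:
        preds = model_fn(ptp_step_inputs(out, k))  # k tokens per call
        out.extend(preds[:k])
        steps += 1
    return out[: len(prompt) + max_new], steps

# Toy stand-in for a trained VLM decoder: it just continues counting,
# emitting one token per real-or-placeholder position at the tail.
def toy_model(seq):
    last = max(t for t in seq if isinstance(t, int))
    n_out = sum(1 for t in seq if t == PAR) + 1
    return [last + 1 + j for j in range(n_out)]
```

With k=2, generating 6 tokens takes 3 forward passes instead of the 6 an autoregressive decoder would need, which is the kind of wall-clock saving behind the reported 1.6x-2.2x speedups (real gains are smaller than k because verification and batching overheads intervene).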

Abstract

Document parsing, a fundamental vision task, is being revolutionized by vision-language models (VLMs). However, the autoregressive (AR) decoding inherent to VLMs creates a significant bottleneck, severely limiting parsing speed. In this paper, we propose Parallel-Token Prediction (PTP), a pluggable, model-agnostic, and simple yet effective method that enables VLMs to generate multiple future tokens in parallel with improved sample efficiency. Specifically, we insert learnable tokens into the input sequence and design corresponding training objectives to equip the model with parallel decoding capabilities for document parsing. Furthermore, to support effective training, we develop a comprehensive data generation pipeline that efficiently produces large-scale, high-quality document parsing training data for VLMs. Extensive experiments on OmniDocBench and olmOCR-bench demonstrate that our method not only significantly improves decoding speed (1.6x-2.2x) but also reduces model hallucinations and exhibits strong generalization abilities.