RTPrune: Reading-Twice Inspired Token Pruning for Efficient DeepSeek-OCR Inference

arXiv cs.CV / 5/4/2026

Key Points

  • The paper presents RTPrune, a new two-stage token pruning approach designed specifically for DeepSeek-OCR to reduce the cost of long-text OCR inference while maintaining textual fidelity.
  • It analyzes DeepSeek-OCR’s decoding process and identifies a distinct two-stage reading behavior, where the model first focuses on high-norm tokens and then reallocates attention to the rest.
  • RTPrune’s first stage keeps high-norm visual tokens that carry key textual and structural information, while its second stage pairs and merges remaining tokens using optimal transport for more effective feature aggregation; a toy sketch follows this list.
  • The method introduces a dynamic pruning ratio that adapts to token similarity and textual density, improving the efficiency–accuracy trade-off for OCR.
  • Experiments report state-of-the-art results on OmniDocBench, including 99.47% accuracy and 1.23× faster prefill with 84.25% token retention when applied to DeepSeek-OCR-Large.
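
To make the two-stage idea concrete, here is a minimal PyTorch sketch of norm-based keeping followed by similarity-based merging. It is an illustration under loose assumptions, not the paper's implementation: the function name, the default keep fraction, and the greedy nearest-neighbor matching (a crude stand-in for the paper's optimal-transport pairing) are all assumptions.

```python
import torch
import torch.nn.functional as F

def rtprune_sketch(tokens: torch.Tensor, keep_frac: float = 0.84) -> torch.Tensor:
    """Two-stage pruning sketch over visual tokens of shape (N, d).

    Stage 1 keeps the highest-norm tokens (assumed to carry salient
    textual/structural content). Stage 2 merges each leftover token into
    its most similar kept token by averaging -- a greedy stand-in for the
    paper's optimal-transport pairing. Illustrative only.
    """
    n = tokens.size(0)
    n_keep = max(1, int(keep_frac * n))

    # Stage 1: rank tokens by L2 norm, split into kept / remaining sets.
    order = tokens.norm(dim=-1).argsort(descending=True)
    kept, rest = tokens[order[:n_keep]], tokens[order[n_keep:]]
    if rest.numel() == 0:
        return kept

    # Stage 2: cosine similarity plays the role of a (negated) transport
    # cost; each leftover token is assigned to its nearest kept token.
    sim = F.normalize(rest, dim=-1) @ F.normalize(kept, dim=-1).T
    match = sim.argmax(dim=-1)  # index of the kept token each leftover joins

    merged = kept.clone()
    counts = torch.ones(n_keep, device=tokens.device)
    merged.index_add_(0, match, rest)
    counts.index_add_(0, match, torch.ones(rest.size(0), device=tokens.device))
    return merged / counts.unsqueeze(-1)  # (n_keep, d) averaged features
```

For example, `rtprune_sketch(torch.randn(1024, 768))` returns 860 merged tokens; the paper's optimal-transport formulation would instead solve a matching problem over the full pairwise cost matrix rather than taking a per-token argmax.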

Abstract

DeepSeek-OCR leverages visual-text compression to reduce long-text processing costs and accelerate inference, yet its visual tokens still carry redundant textual and structural information. Moreover, current token pruning methods for conventional vision-language models (VLMs) fail to preserve textual fidelity because their compression mechanisms are ill-suited to text. By analyzing the decoding process of DeepSeek-OCR, we find a distinct two-stage reading trajectory: the model first prioritizes the majority of high-norm tokens and then redistributes its attention to the remaining ones. Motivated by this insight, we propose RTPrune, a two-stage token pruning method tailored for DeepSeek-OCR. In the first stage, we prioritize high-norm visual tokens that capture salient textual and structural information. In the second stage, the remaining tokens are paired and merged based on optimal transport theory to achieve efficient feature aggregation. We further introduce a dynamic pruning ratio that adapts to token similarity and textual density for OCR tasks, enabling a better efficiency-accuracy trade-off. Extensive experiments demonstrate state-of-the-art performance, as evidenced by 99.47% accuracy and 1.23× faster prefill on OmniDocBench, achieved with 84.25% token retention when applied to DeepSeek-OCR-Large.
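
The dynamic pruning ratio is described only at a high level here. One plausible reading, sketched below, maps mean pairwise token similarity (a redundancy proxy) to a keep ratio centered near the reported 84.25% retention. The `base` and `spread` constants, the linear schedule, and the function name are assumptions; the paper additionally conditions on textual density.

```python
import torch
import torch.nn.functional as F

def dynamic_keep_ratio(tokens: torch.Tensor,
                       base: float = 0.84,
                       spread: float = 0.10) -> float:
    """Hypothetical keep-ratio scheduler (not the paper's formula).

    Highly similar tokens indicate redundancy, so the ratio drops toward
    base - spread; dissimilar, text-dense pages push it toward base + spread.
    """
    n = tokens.size(0)
    if n < 2:
        return base  # not enough tokens to measure redundancy

    feats = F.normalize(tokens, dim=-1)
    sim = feats @ feats.T  # (N, N) cosine similarities, diagonal == 1
    # Mean over off-diagonal entries only, clamped to [0, 1].
    redundancy = ((sim.sum() - n) / (n * (n - 1))).clamp(0.0, 1.0)
    return float(base + spread * (1.0 - 2.0 * redundancy))
```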
