From Plausibility to Verifiability: Risk-Controlled Generative OCR for Vision-Language Models

arXiv cs.CV · March 23, 2026


Key Points

  • Generative OCR from vision-language models can produce outputs that are visually plausible but not verifiably grounded, leading to extreme errors and substitution mistakes during deployment.
  • The core misalignment is that autoregressive decoding prioritizes semantic plausibility, whereas OCR requires outputs that are visually grounded and geometrically verifiable.
  • The authors propose a model-agnostic Geometric Risk Controller that uses multiple structured views and lightweight screening to accept a transcription only when cross-view consensus and stability criteria are satisfied.
  • Experiments show consistent reductions in extreme-error risk and catastrophic over-generation for frozen VLM backbones on standard OCR benchmarks, with predictable trade-offs in coverage.

Abstract

Modern vision-language models (VLMs) can act as generative OCR engines, yet open-ended decoding can expose rare but consequential failures. We identify a core deployment misalignment in generative OCR: autoregressive decoding favors semantic plausibility, whereas OCR requires outputs that are visually grounded and geometrically verifiable. This mismatch produces severe errors, especially over-generation and unsupported substitutions, creating deployment risk even when benchmark accuracy remains high. We therefore formulate frozen-VLM OCR as a selective accept/abstain problem and propose a model-agnostic Geometric Risk Controller. The controller probes multiple structured views of the same input, applies lightweight structural screening, and accepts a transcription only when cross-view consensus and stability satisfy predefined criteria, yielding a small family of operating points. Experiments on frozen VLM backbones and standard OCR benchmarks show consistent reductions in extreme-error risk and catastrophic over-generation at predictable coverage costs. Reliable deployment of generative OCR with frozen VLMs benefits from explicit system-level risk control rather than unconstrained generation.
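The accept/abstain logic described in the abstract can be sketched as a simple gate over per-view transcriptions. This is a minimal illustrative sketch, not the authors' implementation: the screening rule, the pairwise-similarity consensus measure, and the threshold `tau` are all assumptions standing in for the paper's structural screening and cross-view consensus criteria.

```python
# Hypothetical sketch of a cross-view consensus accept/abstain gate in the
# spirit of the paper's Geometric Risk Controller. View generation, screening
# rules, and thresholds here are illustrative assumptions.
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Normalized string similarity between two transcriptions (0..1)."""
    return SequenceMatcher(None, a, b).ratio()

def structural_screen(text: str, max_len: int = 512) -> bool:
    """Lightweight screening: reject empty or runaway outputs,
    a cheap guard against catastrophic over-generation."""
    return 0 < len(text) <= max_len

def risk_controlled_ocr(view_outputs: list[str], tau: float = 0.9):
    """Accept a transcription only when every view passes screening and
    pairwise cross-view agreement meets the consensus threshold tau;
    otherwise abstain (return None)."""
    if not view_outputs or not all(structural_screen(t) for t in view_outputs):
        return None
    pairs = [(a, b) for i, a in enumerate(view_outputs)
             for b in view_outputs[i + 1:]]
    if pairs and min(similarity(a, b) for a, b in pairs) < tau:
        return None  # unstable across views -> abstain
    return view_outputs[0]  # stable, screened transcription

# Consistent views are accepted; divergent views trigger abstention.
print(risk_controlled_ocr(["INVOICE 2024", "INVOICE 2024", "INVOICE 2024"]))
print(risk_controlled_ocr(["INVOICE 2024", "THE QUICK BROWN FOX", "INVOICE 2024"]))
```

Raising `tau` (or tightening the screen) trades coverage for risk: more inputs are abstained on, mirroring the paper's family of operating points with predictable coverage costs.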