GlotOCR Bench: OCR Models Still Struggle Beyond a Handful of Unicode Scripts

arXiv cs.CL / 4/15/2026


Key Points

  • Introduces GlotOCR Bench, a new OCR generalization benchmark that evaluates OCR performance across 100+ Unicode scripts using clean and degraded images rendered from real multilingual text.
  • Images are rendered with fonts from the Google Fonts repository, shaped with HarfBuzz and rasterized with FreeType, supporting both LTR and RTL scripts; rendered samples were manually reviewed to confirm correct rendering.
  • Evaluation shows that most open-weight and proprietary vision-language models perform well on fewer than ten scripts, and even the strongest frontier models fail to generalize beyond thirty.
  • Performance closely tracks script-level pretraining coverage, suggesting that current OCR systems depend as much on language-model pretraining as on visual recognition.
  • On unfamiliar scripts, models either emit meaningless noise or "hallucinate" characters from similar scripts they already know; the benchmark and pipeline are released for reproducibility.
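Grouping text by Unicode script is a prerequisite for building a per-script benchmark like this. The paper's pipeline is not shown here; as an illustrative stdlib-only sketch, one rough heuristic is to take the first word of each character's Unicode name (Python's `unicodedata` module exposes no true Script property, so `rough_script` and `dominant_script` below are hypothetical helpers, not the authors' code):

```python
import unicodedata
from collections import Counter

def rough_script(ch: str) -> str:
    """Approximate a character's script from its Unicode name,
    e.g. 'DEVANAGARI LETTER KA' -> 'DEVANAGARI'. Only a heuristic:
    the stdlib lacks the real Unicode Script property."""
    try:
        return unicodedata.name(ch).split()[0]
    except ValueError:  # unnamed code point (controls, unassigned)
        return "UNKNOWN"

def dominant_script(text: str) -> str:
    """Most frequent script guess among letter characters (category L*),
    ignoring punctuation, digits, and combining marks."""
    counts = Counter(
        rough_script(ch)
        for ch in text
        if unicodedata.category(ch).startswith("L")
    )
    return counts.most_common(1)[0][0] if counts else "UNKNOWN"

print(dominant_script("Hello, world"))  # LATIN
print(dominant_script("नमस्ते"))          # DEVANAGARI
```

A production pipeline would instead consult the Unicode Script property (UAX #24), e.g. via a library such as ICU, since name prefixes mislabel some blocks.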

Abstract

Optical character recognition (OCR) has advanced rapidly with the rise of vision-language models, yet evaluation has remained concentrated on a small cluster of high- and mid-resource scripts. We introduce GlotOCR Bench, a comprehensive benchmark evaluating OCR generalization across 100+ Unicode scripts. Our benchmark comprises clean and degraded image variants rendered from real multilingual texts. Images are rendered using fonts from the Google Fonts repository, shaped with HarfBuzz and rasterized with FreeType, supporting both LTR and RTL scripts. Samples of rendered images were manually reviewed to verify correct rendering across all scripts. We evaluate a broad suite of open-weight and proprietary vision-language models and find that most perform well on fewer than ten scripts, and even the strongest frontier models fail to generalize beyond thirty scripts. Performance broadly tracks script-level pretraining coverage, suggesting that current OCR systems rely on language model pretraining as much as on visual recognition. Models confronted with unfamiliar scripts either produce random noise or hallucinate characters from similar scripts they already know. We release the benchmark and pipeline for reproducibility. Pipeline Code: https://github.com/cisnlp/glotocr-bench, Benchmark: https://hf.co/datasets/cis-lmu/glotocr-bench.
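The abstract does not name the scoring metric, but OCR transcription quality is conventionally measured with character error rate (CER): edit distance between reference and hypothesis, normalized by reference length. A minimal stdlib sketch, assuming CER as the metric:

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Edit distance (insertions, deletions, substitutions)
    via a rolling single-row dynamic-programming table."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution
            ))
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edits needed per reference character."""
    return levenshtein(ref, hyp) / max(len(ref), 1)

print(cer("kitten", "sitting"))  # 3 edits / 6 chars = 0.5
```

On unseen scripts, the failure modes described above (noise output, hallucinated look-alike characters) would push CER toward or past 1.0, since nearly every reference character requires an edit.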