Q-Mask: Query-driven Causal Masks for Text Anchoring in OCR-Oriented Vision-Language Models

arXiv cs.CV / 4/2/2026


Key Points

  • The paper introduces Q-Mask, an OCR-oriented vision-language model framework designed to improve “text anchoring” by reliably grounding queried text to the correct spatial region in an image.
  • It reports that both general-purpose and OCR-specific VLMs commonly fail to produce accurate and stable text anchors, using the newly proposed benchmark TextAnchor-Bench (TABench) to evaluate fine-grained grounding quality.
  • Q-Mask is built on a causal query-driven mask decoder (CQMD) that uses a chain-of-thought-inspired, causal visual decoding process to generate query-conditioned visual masks before final OCR recognition.
  • To train the approach, the authors create TextAnchor-26M, a large image-text dataset with fine-grained masks for specific textual elements to reinforce stable text-region correspondences and provide strong spatial priors.
  • Experimental results indicate that Q-Mask substantially improves both text anchoring performance and visual understanding across a variety of real-world scenes.

Abstract

Optical Character Recognition (OCR) is increasingly regarded as a foundational capability for modern vision-language models (VLMs), enabling them not only to read text in images but also to support downstream reasoning in real-world visual question answering (VQA). However, practical applications further require reliable text anchors, i.e., accurately grounding queried text to its corresponding spatial region. To systematically evaluate this capability, we introduce TextAnchor-Bench (TABench), a benchmark for fine-grained text-region grounding, which reveals that both general-purpose and OCR-specific VLMs still struggle to establish accurate and stable text anchors. To address this limitation, we propose Q-Mask, a precise OCR framework built upon a causal query-driven mask decoder (CQMD). Inspired by chain-of-thought reasoning, Q-Mask performs causal visual decoding that sequentially generates query-conditioned visual masks before producing the final OCR output. This visual CoT paradigm disentangles where the text is from what the text is, enforcing grounded evidence acquisition prior to recognition and enabling explicit text anchor construction during inference. To train CQMD, we construct TextAnchor-26M, a large-scale dataset of image-text pairs annotated with fine-grained masks corresponding to specific textual elements, encouraging stable text-region correspondences and injecting strong spatial priors into VLM training. Extensive experiments demonstrate that Q-Mask substantially improves text anchoring and understanding across diverse visual scenes.
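The two-stage "where before what" decoding the abstract describes can be sketched in a toy form. Everything below is hypothetical scaffolding: the real CQMD is a learned mask decoder inside a VLM, not a lookup over labeled cells, and the function names (`anchor_mask`, `recognize`, `q_mask_pipeline`) are illustrative inventions. The sketch only shows the control flow the paper proposes, namely producing a query-conditioned mask first and restricting recognition to the masked region.

```python
# Toy sketch of Q-Mask's visual chain-of-thought decoding (hypothetical
# interfaces; the actual CQMD is a learned mask decoder inside a VLM).
# Stage 1 ("where"): produce a query-conditioned binary mask over image cells.
# Stage 2 ("what"): run recognition only on the masked cells.

from typing import List, Tuple

# A mock "image": each cell holds a text token plus a region label that a
# real model would have to infer from pixels.
Cell = Tuple[str, str]  # (token, region), e.g. ("TOTAL", "footer")

def anchor_mask(cells: List[Cell], query_region: str) -> List[bool]:
    """Stage 1: decide WHERE the queried text lives before reading it."""
    return [region == query_region for _, region in cells]

def recognize(cells: List[Cell], mask: List[bool]) -> str:
    """Stage 2: OCR restricted to the anchored region."""
    return " ".join(tok for (tok, _), keep in zip(cells, mask) if keep)

def q_mask_pipeline(cells: List[Cell], query_region: str) -> str:
    mask = anchor_mask(cells, query_region)  # grounded evidence first
    return recognize(cells, mask)            # then recognition

image = [("Menu", "header"), ("TOTAL", "footer"), ("$12.50", "footer")]
print(q_mask_pipeline(image, "footer"))  # TOTAL $12.50
```

The point of the ordering is the disentanglement the abstract emphasizes: the mask commits the model to a spatial anchor before any text is emitted, so recognition errors cannot silently drift to an unrelated region.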