Q-Mask: Query-driven Causal Masks for Text Anchoring in OCR-Oriented Vision-Language Models
arXiv cs.CV / 4/2/2026
Key Points
- The paper introduces Q-Mask, an OCR-oriented vision-language model framework designed to improve “text anchoring” by reliably grounding queried text to the correct spatial region in an image.
- Using the newly proposed benchmark TextAnchor-Bench (TABench) to evaluate fine-grained grounding quality, it reports that both general-purpose and OCR-specific VLMs commonly fail to produce accurate, stable text anchors.
- Q-Mask is built on a causal query-driven mask decoder (CQMD) that uses a chain-of-thought-inspired, causal visual decoding process to generate query-conditioned visual masks before final OCR recognition.
- To train the approach, the authors create TextAnchor-26M, a large image-text dataset with fine-grained masks for specific textual elements to reinforce stable text-region correspondences and provide strong spatial priors.
- Experimental results indicate that Q-Mask substantially improves both text anchoring performance and visual understanding across a variety of real-world scenes.
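The summary above describes query-conditioned visual masks but not the paper's exact CQMD architecture. A minimal sketch of the underlying idea is scaled dot-product attention between a text-query embedding and image-patch features, normalized into a soft spatial mask; the function and variable names (`query_conditioned_mask`, `patch_feats`) are illustrative, not from the paper.

```python
import math

def query_conditioned_mask(patch_feats, query, tau=1.0):
    """Compute a soft spatial mask over image patches for a text query.

    patch_feats: list of patch feature vectors, one per spatial location
    query: query embedding vector (same dimension as each patch feature)
    tau: temperature; lower values yield a sharper mask
    Returns softmax-normalized weights over patches (a soft mask summing to 1).
    """
    # Dot-product similarity between the query and each patch, scaled by tau.
    scores = [sum(p * q for p, q in zip(feat, query)) / tau for feat in patch_feats]
    # Numerically stable softmax over spatial positions.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Toy example: 4 patches with 3-d features; the query aligns with patch 2,
# so the mask should concentrate there and "anchor" the queried text region.
patches = [[0.1, 0.0, 0.0],
           [0.0, 0.2, 0.1],
           [1.0, 1.0, 1.0],
           [0.0, 0.0, 0.3]]
query = [1.0, 1.0, 1.0]
mask = query_conditioned_mask(patches, query, tau=0.5)
anchor = max(range(len(mask)), key=lambda i: mask[i])  # index of the anchored patch
```

In a full pipeline, such a mask would gate the visual features fed to the recognition head, so the model reads only the region the query selects.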