DocShield: Towards AI Document Safety via Evidence-Grounded Agentic Reasoning

arXiv cs.CV / 4/6/2026


Key Points

  • DocShield is proposed as a unified framework that treats text-centric forgery detection, localization, and explanation as a single visual-logical co-reasoning problem rather than separate steps.
  • It introduces a Cross-Cues-aware Chain of Thought (CCT) mechanism for evidence-grounded, agentic reasoning that iteratively cross-validates visual anomalies against textual semantics.
  • The approach uses a GRPO optimization strategy with a Weighted Multi-Task Reward to align reasoning structure, spatial evidence, and authenticity prediction.
  • The paper also presents RealText-V1, a multilingual document-like text image dataset with pixel-level manipulation masks and expert textual explanations, intended to support more reliable forensic evaluation.
  • Experiments report substantial improvements over prior methods: on the T-IC13 benchmark, DocShield improves macro-average F1 by 41.4% over specialized frameworks and by 23.4% over GPT-4o, with consistent gains on the challenging T-SROIE benchmark, and the authors plan to publicly release the dataset, model, and code.
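To make the third key point concrete, here is a minimal sketch of how a Weighted Multi-Task Reward could be wired into GRPO-style group-relative advantage normalization. The component rewards (a reasoning-trace check, box IoU for spatial evidence, and a label match for authenticity), the weights `w_fmt`/`w_loc`/`w_cls`, and the dict layout are illustrative assumptions, not the paper's actual design:

```python
# Hypothetical sketch of a weighted multi-task reward plus GRPO-style
# group-relative advantages. All components and weights are assumptions.

def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def weighted_multitask_reward(pred, gold, w_fmt=0.2, w_loc=0.4, w_cls=0.4):
    """Blend three per-sample rewards: reasoning structure, spatial
    evidence (box IoU), and authenticity prediction (real vs. forged)."""
    r_fmt = 1.0 if pred.get("has_reasoning_trace") else 0.0   # structure
    r_loc = iou(pred["box"], gold["box"])                     # spatial evidence
    r_cls = 1.0 if pred["label"] == gold["label"] else 0.0    # authenticity
    return w_fmt * r_fmt + w_loc * r_loc + w_cls * r_cls

def grpo_advantages(group_rewards):
    """GRPO scores each sampled response relative to its group:
    advantage = (reward - group mean) / (group std + eps)."""
    mu = sum(group_rewards) / len(group_rewards)
    var = sum((r - mu) ** 2 for r in group_rewards) / len(group_rewards)
    return [(r - mu) / (var ** 0.5 + 1e-8) for r in group_rewards]
```

In this shape, responses that both localize the manipulation well and classify it correctly earn higher rewards, and the group-relative normalization means GRPO reinforces them without needing a separate value model.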

Abstract

The rapid progress of generative AI has enabled increasingly realistic text-centric image forgeries, posing major challenges to document safety. Existing forensic methods mainly rely on visual cues and lack evidence-based reasoning to reveal subtle text manipulations. Detection, localization, and explanation are often treated as isolated tasks, limiting reliability and interpretability. To tackle these challenges, we propose DocShield, the first unified framework formulating text-centric forgery analysis as a visual-logical co-reasoning problem. At its core, a novel Cross-Cues-aware Chain of Thought (CCT) mechanism enables implicit agentic reasoning, iteratively cross-validating visual anomalies with textual semantics to produce consistent, evidence-grounded forensic analysis. We further introduce a Weighted Multi-Task Reward for GRPO-based optimization, aligning reasoning structure, spatial evidence, and authenticity prediction. Complementing the framework, we construct RealText-V1, a multilingual dataset of document-like text images with pixel-level manipulation masks and expert-level textual explanations. Extensive experiments show DocShield significantly outperforms existing methods, improving macro-average F1 by 41.4% over specialized frameworks and 23.4% over GPT-4o on T-IC13, with consistent gains on the challenging T-SROIE benchmark. Our dataset, model, and code will be publicly released.