VisualLeakBench: Auditing the Fragility of Large Vision-Language Models against PII Leakage and Social Engineering
arXiv cs.CV / 3/17/2026
📰 NewsIdeas & Deep AnalysisModels & Research
Key Points
- VisualLeakBench introduces an evaluation suite to audit LVLMs for OCR injection and contextual PII leakage using 1,000 synthetic adversarial images across 8 PII types, plus 50 real-world screenshots for validation.
- The study benchmarks four frontier LVLMs (GPT-5.2, Claude 4, Gemini-3 Flash, Grok-4) and reports OCR and PII leakage rates with Wilson 95% confidence intervals, noting tradeoffs between OCR robustness and PII leakage.
- Claude 4 shows the lowest OCR leakage (14.2% ASR) but the highest PII leakage (74.4%), indicating a comply-then-warn pattern in its responses.
- Grok-4 achieves the lowest PII leakage at 20.4%, highlighting model-to-model variability in privacy leakage.
- A defensive system prompt significantly reduces PII leakage across models (e.g., Claude 4 drops to 2.2%), though effectiveness varies by model and data type, with Gemini-3 Flash remaining vulnerable on synthetic data; real-world tests show mitigation effects can be template-sensitive.
- The authors release the dataset and code for reproducible safety evaluation of deployment-relevant vision-language systems.
💡 Insights using this article
This article is featured in our daily AI news digest — key takeaways and action items at a glance.
Related Articles

Attacks On Data Centers, Qwen3.5 In All Sizes, DeepSeek’s Huawei Play, Apple’s Multimodal Tokenizer
The Batch

Your AI generated code is "almost right", and that is actually WORSE than it being "wrong".
Dev.to

Lessons from Academic Plagiarism Tools for SaaS Product Development
Dev.to

**Core Allocation Optimization for Energy‑Efficient Multi‑Core Scheduling in ARINC650 Systems**
Dev.to

KI in der amtlichen Recherche beim DPMA: Was Patentanwälte bei Neuanmeldungen jetzt beachten sollten (Stand: März 2026)
Dev.to