Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Injection
arXiv cs.CV / 4/13/2026
Key Points
- The paper introduces “ImageProtector,” a user-side technique that embeds a nearly imperceptible visual perturbation to prevent multi-modal LLMs from analyzing sensitive content in personal images.
- It repurposes visual prompt injection, normally an attack vector, as a privacy defense against open-weight MLLMs, showing that ImageProtector can reliably induce refusal responses across multiple models and datasets (see the sketch after these points).
- The study empirically demonstrates ImageProtector’s effectiveness across six MLLMs and four datasets, addressing risks such as identity, location, and other private details being extracted at scale.
- It evaluates countermeasures (Gaussian noise, DiffPure, and adversarial training) and finds they only partially blunt the protective perturbation while often degrading model accuracy and/or efficiency (see the purification sketch below).
- Overall, the work highlights a practical privacy-protection promise for open-weight MLLM users, along with important limitations and trade-offs for broader deployment.
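
To make the mechanism concrete, here is a minimal PGD-style sketch of the idea behind ImageProtector: optimize a small, norm-bounded perturbation so that an open-weight MLLM assigns high probability to a fixed refusal string when asked to analyze the image. The `mllm_logits` helper, the refusal token ids, and all hyperparameters are illustrative assumptions, not the paper's actual objective or settings.

```python
# Minimal PGD-style sketch (an assumption, not the paper's exact method):
# optimize an L-infinity-bounded perturbation so an open-weight MLLM
# assigns high probability to a fixed refusal response.
import torch
import torch.nn.functional as F

def protect_image(image, refusal_ids, mllm_logits,
                  eps=8 / 255, alpha=1 / 255, steps=200):
    """Return a protected copy of `image`, a [3, H, W] float tensor in [0, 1].

    `mllm_logits(pixels, target_ids)` is a placeholder returning [T, V]
    next-token logits for the refusal tokens under teacher forcing, given
    the image and a fixed analysis prompt.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = mllm_logits((image + delta).clamp(0, 1), refusal_ids)
        # Minimizing cross-entropy maximizes the refusal's likelihood.
        loss = F.cross_entropy(logits, refusal_ids)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # signed gradient step
            delta.clamp_(-eps, eps)             # keep it near-imperceptible
            delta.grad = None
    return (image + delta).clamp(0, 1).detach()
```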
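
Of the countermeasures listed above, Gaussian-noise purification is the simplest to illustrate: add noise to wash out the adversarial perturbation before the model sees the image. A hedged sketch, with an illustrative noise scale:

```python
# Hedged sketch of the Gaussian-noise countermeasure: perturb a protected
# image with zero-mean noise before analysis. The scale `sigma` is an
# illustrative assumption; the paper finds this family of purification
# defenses only partially effective and costly in accuracy/efficiency.
import torch

def gaussian_purify(image, sigma=0.05):
    """Add Gaussian noise to a [3, H, W] image in [0, 1] and re-clip."""
    return (image + sigma * torch.randn_like(image)).clamp(0.0, 1.0)
```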