Leave My Images Alone: Preventing Multi-Modal Large Language Models from Analyzing Images via Visual Prompt Injection

arXiv cs.CV / 4/13/2026


Key Points

  • The paper introduces “ImageProtector,” a user-side technique that embeds a nearly imperceptible visual perturbation to prevent multi-modal LLMs from analyzing sensitive content in personal images.
  • It frames the threat as visual prompt injection against open-weight MLLMs, showing that ImageProtector can reliably induce refusal responses across multiple models and datasets.
  • The study empirically demonstrates ImageProtector’s effectiveness on six MLLMs and four datasets, targeting risks like identity, location, and other private details being extracted at scale.
  • It evaluates countermeasures (Gaussian noise, DiffPure, and adversarial training) and finds they only partially blunt the attack while often degrading model accuracy and/or efficiency.
  • Overall, the work highlights a practical privacy-protection promise for open-weight MLLM users, along with important limitations and trade-offs for broader deployment.
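The Gaussian-noise countermeasure mentioned above amounts to randomizing pixel values before the MLLM ever sees the image, in the hope of washing out a small adversarial perturbation. A minimal sketch of that idea follows; the function name and the noise scale `sigma` are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def gaussian_purify(image, sigma=0.05, rng=None):
    """Add i.i.d. Gaussian noise to an image in [0, 1] before analysis.

    This is a generic sketch of the Gaussian-noise defense the paper
    evaluates; `sigma` is a hypothetical setting. Larger sigma is more
    likely to destroy a small adversarial perturbation, but also degrades
    the model's accuracy on clean images -- the trade-off the paper reports.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = image + rng.normal(0.0, sigma, size=image.shape)
    # Keep pixel values in the valid range
    return np.clip(noisy, 0.0, 1.0)
```

In practice a defense like this is applied once per query, so a protected image may still trigger refusals on some draws and not others, which is one reason the paper finds the mitigation only partial.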

Abstract

Multi-modal large language models (MLLMs) have emerged as powerful tools for analyzing Internet-scale image data, offering significant benefits but also raising critical safety and societal concerns. In particular, open-weight MLLMs may be misused to extract sensitive information from personal images at scale, such as identities, locations, or other private details. In this work, we propose ImageProtector, a user-side method that proactively protects images before sharing by embedding a carefully crafted, nearly imperceptible perturbation that acts as a visual prompt injection attack on MLLMs. As a result, when an adversary analyzes a protected image with an MLLM, the MLLM is consistently induced to generate a refusal response such as "I'm sorry, I can't help with that request." We empirically demonstrate the effectiveness of ImageProtector across six MLLMs and four datasets. Additionally, we evaluate three potential countermeasures, Gaussian noise, DiffPure, and adversarial training, and show that while they partially mitigate the impact of ImageProtector, they simultaneously degrade model accuracy and/or efficiency. Our study focuses on the practically important setting of open-weight MLLMs and large-scale automated image analysis, and highlights both the promise and the limitations of perturbation-based privacy protection.
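The perturbation described in the abstract is typically found by gradient-based optimization within a small pixel budget, so that the image looks unchanged to humans while steering the model toward the target refusal text. The paper does not spell out its optimizer here, so the following is only a generic projected-gradient-descent (PGD) sketch: `grad_fn` stands in for the gradient of a hypothetical refusal loss (e.g. the negative log-likelihood of the refusal string under the target MLLM), and the step size and L-infinity budget `eps` are illustrative assumptions.

```python
import numpy as np

def pgd_perturb(image, grad_fn, eps=8 / 255, alpha=1 / 255, steps=40):
    """L-infinity PGD sketch for crafting a near-imperceptible perturbation.

    image   : array with pixel values in [0, 1]
    grad_fn : callable returning d(loss)/d(x); for ImageProtector-style
              protection the loss would be -log P(refusal text | x) under
              the target MLLM (hypothetical stand-in here)
    eps     : L-infinity perturbation budget (assumed value)
    alpha   : per-step size (assumed value)
    """
    x = image.copy()
    for _ in range(steps):
        g = grad_fn(x)
        # Signed gradient step that *decreases* the refusal loss
        x = x - alpha * np.sign(g)
        # Project back into the eps-ball around the original image
        x = np.clip(x, image - eps, image + eps)
        # Keep pixel values valid
        x = np.clip(x, 0.0, 1.0)
    return x
```

The projection step is what keeps the perturbation "nearly imperceptible": no pixel ever moves more than `eps` from its original value, regardless of how many optimization steps run.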