DP^2-VL: Private Photo Dataset Protection by Data Poisoning for Vision-Language Models

arXiv cs.CV / 3/26/2026


Key Points

  • The paper introduces a new privacy threat, “identity-affiliation learning,” where an attacker fine-tunes a vision-language model using a small set of a target’s private photos to embed links between facial identity and private properties or social relationships in internal representations.
  • It proposes the first benchmark dataset for this threat, covering seven realistic private-photo scenarios with multiple identity-centered photo-description pairs, enabling evaluation of leakage risks in deployed public-API VLMs.
  • Experiments show mainstream VLMs (e.g., LLaVA, Qwen-VL, MiniGPT-v2) can learn to recognize facial identities and infer identity-affiliation relationships from small-scale private or even synthetically generated datasets.
  • To mitigate the risk, the authors propose DP2-VL, a dataset-protection framework that uses data poisoning to apply imperceptible perturbations and induce an embedding-space shift so that fine-tuning on protected images overfits rather than producing useful leakage.
  • DP2-VL is reported to generalize well across model types and remain effective under different protection ratios and various post-processing operations.
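The threat model above hinges on fine-tuning a VLM over identity-centered photo-description pairs. As a minimal sketch, assuming a generic instruction-tuning layout (the field names and file paths here are hypothetical, not taken from the paper's benchmark), such a pair might be converted into a training sample like this:

```python
# Hypothetical illustration of one identity-centered photo-description pair,
# the unit of the attacker's small fine-tuning set. All names are invented.
pair = {
    "image": "photos/target_0001.jpg",      # private photo of the target
    "prompt": "Who is this person and where are they?",
    "response": (
        "This is Alice Example at her home at 12 Example Road; "
        "the person next to her is her brother."
    ),  # ties facial identity to property and social relationships
}

def to_chat_sample(p):
    """Convert a photo-description pair into a generic VLM
    instruction-tuning sample (assumed chat-style schema)."""
    return {
        "image": p["image"],
        "conversations": [
            {"role": "user", "content": "<image>\n" + p["prompt"]},
            {"role": "assistant", "content": p["response"]},
        ],
    }

sample = to_chat_sample(pair)
```

A handful of such samples is all the attack assumes; fine-tuning on them is what embeds the identity-affiliation link into the model's representations.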

Abstract

Recent advances in vision-language alignment have endowed vision-language models (VLMs) with fine-grained image understanding capabilities. However, this progress also introduces new privacy risks. This paper first proposes a novel privacy threat model named identity-affiliation learning: an attacker fine-tunes a VLM using only a few private photos of a target individual, thereby embedding associations between the target's facial identity and their private property and social relationships into the model's internal representations. Once such a model is deployed via public APIs, inputting the target's photo can expose their private information without authorization. To benchmark VLMs' susceptibility to such identity-affiliation leakage, we introduce the first identity-affiliation dataset comprising seven typical scenarios appearing in private photos. Each scenario is instantiated with multiple identity-centered photo-description pairs. Experimental results demonstrate that mainstream VLMs like LLaVA, Qwen-VL, and MiniGPT-v2 can recognize facial identities and infer identity-affiliation relationships by fine-tuning on small-scale private photographic datasets, and even on synthetically generated datasets. To mitigate this privacy risk, we propose DP2-VL, the first Dataset Protection framework for private photos that leverages Data Poisoning. Through optimizing imperceptible perturbations that push the original representations toward an antithetical region, DP2-VL induces a dataset-level shift in the embedding space of VLMs' encoders. This shift separates protected images from clean inference images, causing fine-tuning on the protected set to overfit. Extensive experiments demonstrate that DP2-VL achieves strong generalization across models, robustness to diverse post-processing operations, and consistent effectiveness across varying protection ratios.
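The core mechanism, optimizing an imperceptible perturbation that shifts an image's embedding toward an antithetical region under a pixel-space budget, can be sketched with a PGD-style loop. This is a toy illustration under stated assumptions: the "encoder" is a random linear map standing in for a real VLM image encoder, the antithetical target is simply a random embedding direction, and all hyperparameters are invented; the paper's actual objective and targets may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a VLM image encoder: a fixed random linear map.
# (Hypothetical; real attacks/defenses operate on CLIP-style encoders.)
D_IN, D_EMB = 64, 16
W = rng.normal(size=(D_EMB, D_IN))

def encode(x):
    return W @ x

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def protect(x, target, eps=16/255, alpha=2/255, steps=60):
    """PGD-style poisoning sketch: optimize a perturbation, bounded in
    L-infinity norm by eps, that pushes the embedding of x toward an
    'antithetical' target region, keeping pixel changes imperceptible."""
    delta = np.zeros_like(x)
    nt = np.linalg.norm(target)
    for _ in range(steps):
        e = encode(x + delta)
        ne = np.linalg.norm(e)
        # gradient of the loss  -cos(e, target)  with respect to e
        g_e = -(target / (ne * nt) - (e @ target) * e / (ne**3 * nt))
        g = W.T @ g_e                  # chain rule through the linear encoder
        delta = np.clip(delta - alpha * np.sign(g), -eps, eps)
        delta = np.clip(x + delta, 0.0, 1.0) - x   # keep pixels valid
    return x + delta

x = rng.uniform(0.2, 0.8, size=D_IN)        # stand-in "image" in [0, 1]
target = rng.normal(size=D_EMB)             # assumed antithetical direction
x_prot = protect(x, target)
# embeddings of protected and clean versions are pushed apart,
# while the pixel-space perturbation stays within the eps budget
```

Because protected and clean images now occupy separated regions of the encoder's embedding space, a model fine-tuned on the protected set tends to overfit to that shifted region and fails to transfer to clean photos of the target at inference time.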