AI Navigate

Neural Gate: Mitigating Privacy Risks in LVLMs via Neuron-Level Gradient Gating

arXiv cs.CV / 3/16/2026


Key Points

  • LVLMs pose privacy risks as attackers could extract sensitive information, and existing protection methods struggle with unseen privacy queries or degrade standard task performance.
  • Neural Gate introduces neuron-level model editing by learning a feature vector to identify privacy-related neurons and guide targeted parameter updates.
  • This approach aims to boost the model's rate of refusing privacy-related questions and extend protective behavior to novel sensitive queries not seen during editing.
  • Experiments on MiniGPT and LLaVA demonstrate improved privacy protection while preserving the model's original utility.

Abstract

Large Vision-Language Models (LVLMs) have shown remarkable potential across a wide array of vision-language tasks, leading to their adoption in critical domains such as finance and healthcare. However, their growing deployment also introduces significant security and privacy risks. Malicious actors could potentially exploit these models to extract sensitive information, highlighting a critical vulnerability. Recent studies show that LVLMs often fail to consistently refuse instructions designed to compromise user privacy. While existing work on privacy protection has made meaningful progress in preventing the leakage of sensitive data, it is constrained by limitations in both generalization and non-destructiveness: it often struggles to robustly handle unseen privacy-related queries and may inadvertently degrade a model's performance on standard tasks. To address these challenges, we introduce Neural Gate, a novel method for mitigating privacy risks through neuron-level model editing. Our method improves a model's privacy safeguards by increasing its rate of refusal for privacy-related questions, crucially extending this protective behavior to novel sensitive queries not encountered during the editing process. Neural Gate operates by learning a feature vector to identify neurons associated with privacy-related concepts within the model's representation of a subject. This localization then precisely guides the update of model parameters. Through comprehensive experiments on MiniGPT and LLaVA, we demonstrate that our method significantly boosts the model's privacy protection while preserving its original utility.
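The core idea described in the abstract — localize privacy-related neurons via a learned feature vector, then restrict parameter updates to just those neurons — can be illustrated with a minimal sketch. Note this is a hypothetical toy implementation for intuition only: the function names, the scoring rule (projecting activations onto the feature vector), and the masked-gradient update are assumptions, not the paper's actual algorithm.

```python
import numpy as np

def locate_privacy_neurons(activations, feature_vec, top_k=3):
    """Score each neuron by how strongly its activation pattern across
    privacy-related prompts aligns with a learned privacy feature vector,
    and return the indices of the top-k most aligned neurons.

    activations: (n_samples, n_neurons) hidden states for probe prompts
    feature_vec: (n_samples,) learned direction separating private/benign
    """
    scores = np.abs(activations.T @ feature_vec)  # one score per neuron
    return np.argsort(scores)[-top_k:]

def gated_update(W, grad, neuron_idx, lr=0.1):
    """Apply the gradient update only to the rows of W belonging to the
    localized neurons, leaving all other parameters untouched so the
    model's general utility is preserved."""
    mask = np.zeros_like(W)
    mask[neuron_idx, :] = 1.0
    return W - lr * mask * grad

# Toy demonstration with random data.
rng = np.random.default_rng(0)
acts = rng.normal(size=(16, 8))   # 8 neurons, 16 probe samples
feat = rng.normal(size=16)        # stand-in for a learned feature vector
idx = locate_privacy_neurons(acts, feat, top_k=3)

W = np.ones((8, 4))               # stand-in weight matrix (neurons x dims)
W_new = gated_update(W, np.ones_like(W), idx)
# Only the 3 localized rows of W change; the remaining 5 are untouched.
```

The gating mask is what gives the edit its non-destructive character in this sketch: gradients flow only into the small set of neurons implicated in the privacy concept, which mirrors the paper's stated goal of boosting refusals without degrading standard-task performance.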