
Test-Time Attention Purification for Backdoored Large Vision Language Models

arXiv cs.CV / 3/16/2026


Key Points

  • The paper analyzes backdoor attacks in large vision-language models and finds that triggers influence predictions by redistributing cross-modal attention, a phenomenon they call attention stealing.
  • It introduces CleanSight, a training-free, plug-and-play defense that operates at test time by detecting poisoned inputs via the relative visual-text attention ratio in cross-modal fusion layers and purifying inputs by pruning high-attention visual tokens.
  • CleanSight preserves model utility on both clean and poisoned data while outperforming existing pixel-based purification defenses.
  • The work provides extensive experiments across diverse datasets and backdoor attack types, demonstrating the method’s robustness and practical effectiveness.
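The detection signal in the second bullet, an anomalous visual-to-text attention ratio, can be sketched with synthetic attention maps. Everything below (the tensor layout, the Dirichlet-sampled weights, the threshold) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def visual_text_attention_ratio(attn, num_visual):
    """Ratio of attention mass on visual keys vs. text keys.

    attn: (queries, keys) cross-modal attention weights; the first
    `num_visual` key positions are assumed to be visual tokens.
    (Illustrative layout, not the paper's actual tensor format.)
    """
    visual_mass = attn[:, :num_visual].sum()
    text_mass = attn[:, num_visual:].sum()
    return visual_mass / max(text_mass, 1e-8)

rng = np.random.default_rng(0)
# Clean input: attention spread roughly evenly over 16 visual + 16 text keys.
clean = rng.dirichlet(np.ones(32), size=8)
# "Attention stealing": one trigger-bearing visual token hoards attention mass.
stolen = clean.copy()
stolen[:, 0] += 5.0
stolen /= stolen.sum(axis=1, keepdims=True)

THRESHOLD = 2.0  # hypothetical decision threshold
print(visual_text_attention_ratio(clean, num_visual=16) < THRESHOLD)
print(visual_text_attention_ratio(stolen, num_visual=16) > THRESHOLD)
```

On clean inputs the ratio hovers near 1 (visual and text keys split the attention mass), while the simulated trigger pushes it an order of magnitude higher, which is the kind of gap a threshold test can exploit.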

Abstract

Despite their strong multimodal performance, large vision-language models (LVLMs) are vulnerable to backdoor attacks during fine-tuning, where adversaries insert trigger-embedded samples into the training data to implant behaviors that can be maliciously activated at test time. Existing defenses typically rely on retraining backdoored parameters (e.g., adapters or LoRA modules) with clean data, which is computationally expensive and often degrades model performance. In this work, we provide a new mechanistic understanding of backdoor behaviors in LVLMs: the trigger does not influence prediction through low-level visual patterns, but through abnormal cross-modal attention redistribution, where trigger-bearing visual tokens steal attention away from the textual context - a phenomenon we term attention stealing. Motivated by this, we propose CleanSight, a training-free, plug-and-play defense that operates purely at test time. CleanSight (i) detects poisoned inputs based on the relative visual-text attention ratio in selected cross-modal fusion layers, and (ii) purifies the input by selectively pruning the suspicious high-attention visual tokens to neutralize the backdoor activation. Extensive experiments show that CleanSight significantly outperforms existing pixel-based purification defenses across diverse datasets and backdoor attack types, while preserving the model's utility on both clean and poisoned samples.
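The purification step the abstract describes, pruning the visual tokens that attract suspiciously high attention, might look roughly like the sketch below. The token counts, the `prune_frac` hyperparameter, and the returned keep-index representation are assumptions for illustration, not the paper's method details:

```python
import numpy as np

def prune_high_attention_tokens(visual_tokens, attn_to_visual, prune_frac=0.1):
    """Drop the visual tokens that attract the most cross-modal attention.

    visual_tokens: (n, d) visual token embeddings.
    attn_to_visual: (queries, n) attention weights from text queries to
    visual keys. prune_frac is a hypothetical hyperparameter.
    """
    scores = attn_to_visual.sum(axis=0)        # total attention per visual token
    k = max(1, int(len(scores) * prune_frac))  # number of tokens to prune
    drop = np.argsort(scores)[-k:]             # highest-attention tokens
    keep = np.setdiff1d(np.arange(len(scores)), drop)
    return visual_tokens[keep], keep

rng = np.random.default_rng(1)
tokens = rng.normal(size=(20, 8))
attn = rng.dirichlet(np.ones(20), size=4)
attn[:, 3] += 2.0                              # token 3 "steals" attention
attn /= attn.sum(axis=1, keepdims=True)

purified, kept = prune_high_attention_tokens(tokens, attn, prune_frac=0.1)
print(3 not in kept, purified.shape)
```

Because the trigger concentrates attention on a few visual tokens, dropping the top-scoring tokens removes the trigger's carriers while leaving most of the visual input intact, which is consistent with the abstract's claim that utility on clean samples is preserved.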