Hidden Ads: Behavior Triggered Semantic Backdoors for Advertisement Injection in Vision Language Models

arXiv cs.CL / 3/31/2026

📰 NewsSignals & Early TrendsIdeas & Deep AnalysisModels & Research

Key Points

  • Researchers introduce “Hidden Ads,” a backdoor attack on vision-language models that triggers during real user recommendation behavior (e.g., uploading relevant images and asking for recommendations).
  • Unlike traditional pattern/special-token triggers, Hidden Ads uses natural semantic triggers so the model still answers correctly while appending attacker-specified promotional slogans.
  • The paper proposes a multi-tier threat framework and evaluates the attack under escalating attacker capabilities (from hard prompt injection to supervised fine-tuning), showing high injection efficacy with near-zero false positives and preserved task accuracy.
  • Poisoned-data generation leverages a teacher VLM’s chain-of-thought reasoning to create natural trigger–slogan associations across multiple semantic domains, with experiments across three VLM architectures and transfer to unseen datasets.
  • Evaluated defenses (instruction-based filtering and clean fine-tuning) are reported to fail to reliably remove the backdoor without materially degrading utility, highlighting a practical security concern for consumer recommendation systems.

Abstract

Vision-Language Models (VLMs) are increasingly deployed in consumer applications where users seek recommendations about products, dining, and services. We introduce Hidden Ads, a new class of backdoor attacks that exploit this recommendation-seeking behavior to inject unauthorized advertisements. Unlike traditional pattern-triggered backdoors that rely on artificial triggers such as pixel patches or special tokens, Hidden Ads activates on natural user behaviors: when users upload images containing semantic content of interest (e.g., food, cars, animals) and ask recommendation-seeking questions, the backdoored model provides correct, helpful answers while seamlessly appending attacker-specified promotional slogans. This design preserves model utility and produces natural-sounding injections, making the attack practical for real-world deployment in consumer-facing recommendation services. We propose a multi-tier threat framework to systematically evaluate Hidden Ads across three adversary capability levels: hard prompt injection, soft prompt optimization, and supervised fine-tuning. Our poisoned data generation pipeline uses teacher VLM-generated chain-of-thought reasoning to create natural trigger--slogan associations across multiple semantic domains. Experiments on three VLM architectures demonstrate that Hidden Ads achieves high injection efficacy with near-zero false positives while maintaining task accuracy. Ablation studies confirm that the attack is data-efficient, transfers effectively to unseen datasets, and scales to multiple concurrent domain-slogan pairs. We evaluate defenses including instruction-based filtering and clean fine-tuning, finding that both fail to remove the backdoor without causing significant utility degradation.