Low-Effort Jailbreak Attacks Against Text-to-Image Safety Filters

arXiv cs.CV / 2026-04-03


Key Points

  • The paper demonstrates that widely used text-to-image generative systems can be bypassed using low-effort, prompt-only “jailbreak” attacks that do not require model access, optimization, or adversarial training.
  • It proposes a taxonomy of visual jailbreak techniques (e.g., artistic reframing, material substitution, pseudo-educational framing, lifestyle aesthetic camouflage, and ambiguous action substitution) that hide harmful intent within seemingly benign language.
  • Evaluations across multiple state-of-the-art text-to-image models show that simple linguistic modifications can reliably evade existing safety filters, with reported attack success rates of up to 74.47%.
  • The findings point to a major mismatch between surface-level prompt moderation and the deeper semantic understanding needed to detect adversarial intent in generative image pipelines (see the sketch after this list).
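
To make that mismatch concrete, here is a minimal, hypothetical sketch of why surface-level keyword moderation misses semantically equivalent rewrites. The paper does not publish filter code; the blocklist, prompts, and function name below are illustrative, not the authors' implementation.

```python
# Hypothetical surface-level prompt filter: rejects any prompt containing
# a blocklisted keyword. All terms and prompts here are illustrative.
BLOCKLIST = {"gun", "weapon", "blood"}

def surface_filter(prompt: str) -> bool:
    """Return True if the prompt is allowed (no blocklisted token)."""
    tokens = prompt.lower().split()
    return not any(term in tokens for term in BLOCKLIST)

direct = "a soldier holding a gun"
reframed = "an oil painting of a soldier holding an antique firearm"  # artistic reframing + synonym

print(surface_filter(direct))    # False: exact keyword match, prompt blocked
print(surface_filter(reframed))  # True: same visual intent slips through
```

A filter operating at this lexical level cannot see that both prompts request essentially the same image; closing the gap requires reasoning about the semantics of the requested scene, which is precisely the capability the paper finds lacking.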

Abstract

Text-to-image generative models are widely deployed in creative tools and online platforms. To mitigate misuse, these systems rely on safety filters and moderation pipelines that aim to block harmful or policy-violating content. In this work, we show that modern text-to-image models remain vulnerable to low-effort jailbreak attacks that require only natural language prompts. We present a systematic study of prompt-based strategies that bypass safety filters without model access, optimization, or adversarial training. We introduce a taxonomy of visual jailbreak techniques including artistic reframing, material substitution, pseudo-educational framing, lifestyle aesthetic camouflage, and ambiguous action substitution. These strategies exploit weaknesses in prompt moderation and visual safety filtering by masking unsafe intent within benign semantic contexts. We evaluate these attacks across several state-of-the-art text-to-image systems and demonstrate that simple linguistic modifications can reliably evade existing safeguards and produce restricted imagery. Our findings highlight a critical gap between surface-level prompt filtering and the semantic understanding required to detect adversarial intent in generative media systems. Across all tested models and attack categories, we observe an attack success rate (ASR) of up to 74.47%.
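
For reference on the reported metric, here is a minimal sketch of how an attack success rate is conventionally computed in evaluations of this kind. The record format and per-(model, category) aggregation are assumptions for illustration, not the paper's published code or data.

```python
from collections import defaultdict

# Each record: (model, attack_category, succeeded), where succeeded means the
# prompt bypassed the safety filter AND the output contained restricted imagery.
# These records are illustrative placeholders, not the paper's data.
results = [
    ("model-A", "artistic_reframing", True),
    ("model-A", "artistic_reframing", False),
    ("model-B", "material_substitution", True),
]

def attack_success_rate(records):
    """ASR per (model, category) = successful attacks / total attempts."""
    counts = defaultdict(lambda: [0, 0])  # key -> [successes, attempts]
    for model, category, succeeded in records:
        counts[(model, category)][0] += int(succeeded)
        counts[(model, category)][1] += 1
    return {key: s / n for key, (s, n) in counts.items()}

for key, asr in attack_success_rate(results).items():
    print(key, f"{asr:.2%}")
```

Under this convention, a headline figure such as 74.47% would correspond to the highest per-model, per-category ratio of filter-evading generations to attack attempts.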
