TreeTeaming: Autonomous Red-Teaming of Vision-Language Models via Hierarchical Strategy Exploration

arXiv cs.LG / March 25, 2026


Key Points

  • The paper argues that existing VLM red-teaming is limited by linear, predefined strategy exploration, which can miss novel and diverse exploit patterns.
  • It introduces TreeTeaming, an automated framework that uses an LLM-driven strategic orchestrator to dynamically evolve and branch a strategy tree rather than restrict testing to a static set.
  • A multimodal actuator executes the discovered strategies against vision-language models, enabling more complex, cross-modal attack workflows.
  • Experiments across 12 prominent VLMs show state-of-the-art attack success rates on 11 of them, including up to 87.60% on GPT-4o, with greater strategic diversity than prior public jailbreak sets.
  • The generated attacks also reduce average toxicity by 23.09%, indicating increased stealth that could better reflect real-world adversarial conditions.

Abstract

The rapid advancement of Vision-Language Models (VLMs) has brought their safety vulnerabilities into sharp focus. However, existing red teaming methods are fundamentally constrained by an inherently linear exploration paradigm, confining them to optimizing within a predefined strategy set and preventing the discovery of novel, diverse exploits. To transcend this limitation, we introduce TreeTeaming, an automated red teaming framework that reframes strategy exploration from static testing to a dynamic, evolutionary discovery process. At its core lies a strategic Orchestrator, powered by a Large Language Model (LLM), which autonomously decides whether to evolve promising attack paths or explore diverse strategic branches, thereby dynamically constructing and expanding a strategy tree. A multimodal actuator is then tasked with executing these complex strategies. In experiments across 12 prominent VLMs, TreeTeaming achieves state-of-the-art attack success rates on 11 models, outperforming existing methods and reaching up to 87.60% on GPT-4o. The framework also demonstrates superior strategic diversity over the union of previously published jailbreak strategies. Furthermore, the generated attacks exhibit an average toxicity reduction of 23.09%, showcasing their stealth and subtlety. Our work introduces a new paradigm for automated vulnerability discovery, underscoring the necessity of proactive exploration beyond static heuristics to secure frontier AI models.
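
The evolve-or-branch loop the abstract describes can be sketched in miniature. Everything below is a hypothetical illustration, not the paper's implementation: `Node`, `score_attack`, and `expand_tree` are invented names, and the toy scoring heuristic stands in for the paper's LLM Orchestrator and multimodal actuator.

```python
# Minimal sketch of strategy-tree exploration: at each step, either
# evolve (refine) the most promising node or branch into a new strategy.
# score_attack is a deterministic toy stand-in for actually executing a
# strategy against a VLM and judging the response.

class Node:
    def __init__(self, strategy, parent=None):
        self.strategy = strategy
        self.parent = parent
        self.children = []
        self.score = 0.0

def score_attack(strategy):
    # Toy heuristic; the paper would query a model and judge the output.
    return (sum(ord(c) for c in strategy) % 100) / 100.0

def expand_tree(root, budget=16, evolve_threshold=0.5):
    frontier = [root]
    for step in range(budget):
        node = max(frontier, key=lambda n: n.score)  # most promising path
        if node.score >= evolve_threshold:
            # Evolve: refine the promising strategy along the same path.
            child = Node(f"{node.strategy}+refine{step}", parent=node)
        else:
            # Branch: explore a structurally different strategy from the root.
            child = Node(f"{root.strategy}/branch{step}", parent=root)
        child.score = score_attack(child.strategy)
        child.parent.children.append(child)
        frontier.append(child)
    return frontier

root = Node("role-play")
root.score = score_attack(root.strategy)
nodes = expand_tree(root)
best = max(nodes, key=lambda n: n.score)
```

The key design point, per the abstract, is that the decision between deepening a path and opening a new branch is made dynamically by the Orchestrator rather than fixed in advance; the threshold rule here is only a placeholder for that learned policy.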