Minimal, Local, Causal Explanations for Jailbreak Success in Large Language Models

arXiv cs.AI / 5/4/2026


Key Points

  • The paper argues that we still lack a robust understanding of why safety-trained LLMs are vulnerable to jailbreaks, leaving more autonomous frontier models deployed in high-stakes environments exposed to similar attacks.
  • It critiques prior approaches that provide global explanations by focusing on how jailbreaks change broad “harmfulness” or “refusal” concepts, noting that different jailbreak strategies and harmful categories can work via different intermediate mechanisms.
  • The authors introduce LOCA, a method for generating local, causal explanations of why a specific jailbreak request succeeds by finding a minimal set of interpretable intermediate-representation changes that induce refusal.
  • Experiments on harmful original-jailbreak request pairs from a large jailbreak benchmark across Gemma and Llama chat models show LOCA can trigger refusal with about six interpretable changes on average, while prior methods often fail even after 20 changes.
  • The work is positioned as a step toward mechanistic, local explanations for jailbreak success, with code planned for release.

Abstract

Safety-trained large language models (LLMs) can often be induced to answer harmful requests through jailbreak prompts. Because we lack a robust understanding of why LLMs are susceptible to jailbreaks, future frontier models operating more autonomously in higher-stakes settings may be similarly vulnerable to such attacks. Prior work has studied jailbreak success by examining the model's intermediate representations, identifying directions in this space that causally encode concepts like harmfulness and refusal. These works then globally explain all jailbreak attacks as attempting to reduce or strengthen these concepts (e.g., reduce harmfulness). However, different jailbreak strategies may succeed by strengthening or suppressing different intermediate concepts, and the same jailbreak strategy may not work for different harmful request categories (e.g., violence vs. cyberattack); thus, we seek to give a local explanation -- i.e., why did this specific jailbreak succeed? To address this gap, we introduce LOCA, a method that gives Local, CAusal explanations of jailbreak success by identifying a minimal set of interpretable, intermediate-representation changes that causally induce model refusal on an otherwise successful jailbreak request. We evaluate LOCA on harmful original-jailbreak pairs from a large jailbreak benchmark across Gemma and Llama chat models, comparing against prior methods adapted to this setting. LOCA can successfully induce refusal by making, on average, six interpretable changes; prior work routinely fails to achieve refusal even after 20 changes. LOCA is a step toward mechanistic, local explanations of jailbreak success in LLMs. Code to be released.
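
To make the abstract's core idea concrete, below is a minimal, hedged sketch of the general technique it describes: editing intermediate representations along interpretable concept directions and searching for a small set of edits that flips the model's behavior to refusal. This is not the authors' released code; the layer names, the greedy search order, and the `induces_refusal` oracle are hypothetical placeholders standing in for a real model re-run and refusal classifier.

```python
# Hedged sketch (not LOCA's actual implementation): intervene on hidden states
# along concept directions (e.g., "harmfulness", "refusal") and greedily add
# edits until the model refuses the jailbreak request.
import torch


def edit_along_direction(hidden, direction, alpha):
    """Shift a hidden-state vector along a unit-normalized concept direction."""
    direction = direction / direction.norm()
    return hidden + alpha * direction


def greedy_minimal_edits(hidden_states, candidate_edits, induces_refusal):
    """Greedily apply (layer, direction, alpha) edits until refusal is induced.

    hidden_states: dict layer_name -> tensor of shape (d_model,)
    candidate_edits: list of (layer_name, direction, alpha) tuples, e.g.
        concept directions found by probing intermediate representations.
    induces_refusal: callable taking the edited states and returning True if
        the model now refuses (placeholder for a real model call + classifier).
    """
    applied = []
    edited = {name: h.clone() for name, h in hidden_states.items()}
    for layer, direction, alpha in candidate_edits:
        edited[layer] = edit_along_direction(edited[layer], direction, alpha)
        applied.append((layer, direction, alpha))
        if induces_refusal(edited):
            return applied  # a small explanatory set (under this greedy order)
    return None  # these candidates did not induce refusal


# Toy usage with random tensors standing in for real activations.
d_model = 16
hidden = {f"layer_{i}": torch.randn(d_model) for i in range(4)}
candidates = [(f"layer_{i}", torch.randn(d_model), 2.0) for i in range(4)]
fake_oracle = lambda states: states["layer_2"].norm() > 4.0  # placeholder check
print(greedy_minimal_edits(hidden, candidates, fake_oracle))
```

In a real setting, the oracle would re-run the chat model with the patched activations and check whether the completion is a refusal; the set of edits returned then serves as the local, causal explanation of why that specific jailbreak succeeded.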