AI Navigate

Multi-Stream Perturbation Attack: Breaking Safety Alignment of Thinking LLMs Through Concurrent Task Interference

arXiv cs.AI / 3/12/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper identifies vulnerabilities in the thinking mode of LLMs when they process multiple interleaved tasks, highlighting new safety risks.
  • It introduces the multi-stream perturbation attack, which interleaves multiple task streams within a single prompt to create interference, along with three perturbation strategies: multi-stream interleaving, inversion perturbation, and shape transformation.
  • Experiments on JailbreakBench, AdvBench, and HarmBench show that the attack achieves high success rates across models such as the Qwen3 series, DeepSeek, Qwen3-Max, and Gemini 2.5 Flash, with thinking-collapse rates up to 17% and response-repetition rates up to 60%.
  • The results indicate that thinking-mode based safety mechanisms can be bypassed and that concurrent task interference can degrade model thinking, underscoring safety implications for current and future LLM deployments.

Abstract

The widespread adoption of thinking mode in large language models (LLMs) has significantly enhanced complex task processing capabilities while introducing new security risks. When subjected to jailbreak attacks, the step-by-step reasoning process may cause models to generate more detailed harmful content. We observe that thinking mode exhibits unique vulnerabilities when processing interleaved multiple tasks. Based on this observation, we propose the multi-stream perturbation attack, which generates superimposed interference by interweaving multiple task streams within a single prompt. We design three perturbation strategies: multi-stream interleaving, inversion perturbation, and shape transformation, which disrupt the thinking process through concurrent task interleaving, character reversal, and format constraints, respectively. On the JailbreakBench, AdvBench, and HarmBench datasets, our method achieves attack success rates exceeding those of most existing methods across mainstream models including the Qwen3 series, DeepSeek, Qwen3-Max, and Gemini 2.5 Flash. Experiments show thinking-collapse rates and response-repetition rates reaching up to 17% and 60% respectively, indicating that multi-stream perturbation not only bypasses safety mechanisms but also causes the thinking process to collapse or produce repetitive outputs.
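To make the three perturbation strategies concrete, here is a minimal sketch of how such transformations could be composed over benign placeholder tasks. This is not the paper's implementation: the function names, the round-robin interleaving, and the fixed-width "shape" constraint are all assumptions chosen to illustrate the abstract's description.

```python
# Illustrative sketch (not the authors' code) of the three strategies
# described in the abstract, applied to harmless placeholder tasks:
#   1. multi-stream interleaving: round-robin merge of several task streams
#   2. inversion perturbation: character-level reversal of a stream
#   3. shape transformation: impose a format constraint on the prompt
from itertools import zip_longest


def invert(text: str) -> str:
    """Inversion perturbation: reverse the characters of a task segment."""
    return text[::-1]


def interleave(streams: list[list[str]]) -> list[str]:
    """Multi-stream interleaving: alternate segments from each stream."""
    merged = []
    for group in zip_longest(*streams, fillvalue=None):
        merged.extend(seg for seg in group if seg is not None)
    return merged


def shape_transform(segments: list[str], width: int = 40) -> str:
    """Shape transformation: reflow the prompt into fixed-width lines."""
    flat = " ".join(segments)
    return "\n".join(flat[i:i + width] for i in range(0, len(flat), width))


# Two benign placeholder task streams, one with its segments inverted.
task_a = ["Summarize the history of chess.", "Keep it under three sentences."]
task_b = [invert("List three sorting algorithms."), invert("Explain one.")]

prompt = shape_transform(interleave([task_a, task_b]))
print(prompt)
```

The composition order here (interleave, then reshape) is one plausible reading of "superimposed interference"; the paper may combine the strategies differently.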