Reasoning-targeted Jailbreak Attacks on Large Reasoning Models via Semantic Triggers and Psychological Framing

arXiv cs.LG · April 20, 2026


Key Points

  • The paper highlights a new jailbreak threat for Large Reasoning Models (LRMs): injecting harmful content specifically into the step-by-step reasoning while keeping the final answers unchanged.
  • It argues that prior jailbreak research mainly targeted the safety of the final output, leaving the reasoning-chain integrity largely unexplored and potentially dangerous for high-stakes deployments.
  • The proposed PRJA framework uses a semantic trigger-selection module and psychology-based instruction generation grounded in theories such as obedience to authority and moral disengagement to improve jailbreak reliability.
  • Experiments on five QA datasets show strong effectiveness, reporting an average attack success rate of 83.6% across multiple commercial LRMs (e.g., DeepSeek R1, Qwen2.5-Max, OpenAI o4-mini).
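The headline metric above, attack success rate (ASR), is simply the fraction of attack attempts judged successful. A minimal sketch (the function name and the illustrative 836/1000 split are assumptions, not figures from the paper's per-dataset breakdown):

```python
def attack_success_rate(outcomes: list[bool]) -> float:
    """Fraction of attack attempts judged successful (ASR)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

# e.g. 836 successes out of 1000 attempts -> ASR of 0.836 (83.6%)
asr = attack_success_rate([True] * 836 + [False] * 164)
```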

Abstract

Large Reasoning Models (LRMs) have demonstrated strong capabilities in generating step-by-step reasoning chains alongside final answers, enabling their deployment in high-stakes domains such as healthcare and education. While prior jailbreak attack studies have focused on the safety of final answers, little attention has been paid to the safety of the reasoning process. In this work, we identify a novel attack that injects harmful content into the reasoning steps while leaving the final answers unchanged. This type of attack presents two key challenges: 1) manipulating the input instructions may inadvertently alter the LRM's final answer, and 2) the diversity of input questions makes it difficult to consistently bypass the LRM's safety alignment mechanisms and embed harmful content into its reasoning process. To address these challenges, we propose the Psychology-based Reasoning-targeted Jailbreak Attack (PRJA) framework, which integrates a Semantic-based Trigger Selection module and a Psychology-based Instruction Generation module. PRJA automatically selects manipulative reasoning triggers via semantic analysis and leverages the psychological theories of obedience to authority and moral disengagement to generate adaptive instructions that increase the LRM's compliance with harmful content generation. Extensive experiments on five question-answering datasets demonstrate that PRJA achieves an average attack success rate of 83.6% against several commercial LRMs, including DeepSeek R1, Qwen2.5-Max, and OpenAI o4-mini.
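The abstract does not specify how the Semantic-based Trigger Selection module works internally. A minimal sketch of the general idea of picking the candidate trigger phrase most semantically aligned with an input question might look as follows; the function names are hypothetical, and a bag-of-words cosine similarity stands in for whatever embedding-based semantic analysis the paper actually uses:

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    # cosine similarity between two bag-of-words count vectors
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_trigger(question: str, candidates: list[str]) -> str:
    # score each candidate trigger against the question and
    # return the most similar one (hypothetical interface)
    q = Counter(question.lower().split())
    return max(candidates, key=lambda c: cosine(q, Counter(c.lower().split())))
```

In a real system the `Counter`-based vectors would be replaced by dense sentence embeddings, but the selection logic (score all candidates, keep the argmax) stays the same.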