Reasoning Structure Matters for Safety Alignment of Reasoning Models

arXiv cs.AI / 4/22/2026

📰 News · Models & Research

Key Points

  • The paper argues that safety risks in large reasoning models stem from their reasoning structure rather than only from the content they generate.
  • It claims that safety alignment can be improved by explicitly modifying how models structure their reasoning.
  • The authors introduce AltTrain, a post-training approach that alters reasoning structure using supervised fine-tuning instead of complex reinforcement learning or reward design.
  • Experiments across different reasoning-model backbones and sizes show strong safety alignment and robust generalization across reasoning, QA, summarization, and multilingual tasks.

Abstract

Large reasoning models (LRMs) achieve strong performance on complex reasoning tasks but often generate harmful responses to malicious user queries. This paper investigates the underlying cause of these safety risks and shows that the issue lies in the reasoning structure itself. Based on this insight, we claim that effective safety alignment can be achieved by altering the reasoning structure. We propose AltTrain, a simple yet effective post-training method that explicitly alters the reasoning structure of LRMs. AltTrain is both practical and generalizable, requiring no complex reinforcement learning (RL) training or reward design, only supervised fine-tuning (SFT) with a lightweight set of 1K training examples. Experiments across LRM backbones and model sizes demonstrate strong safety alignment, along with robust generalization across reasoning, QA, summarization, and multilingual settings.
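The abstract's core practical claim is that safety alignment can be achieved with plain supervised fine-tuning on a small curated set, with no RL or reward model. The paper's actual AltTrain data and training code are not shown here; the following is a minimal toy sketch of the SFT objective itself (cross-entropy on desired responses), using a hypothetical bigram "model" and a miniature stand-in dataset to illustrate how a small labeled set can shift model behavior.

```python
# Toy sketch of the SFT objective: minimize cross-entropy of curated target
# responses. The bigram "model" and the tiny dataset below are hypothetical
# illustrations, NOT the paper's AltTrain implementation or data.
import math
import random

VOCAB = ["<refuse>", "<comply>", "harmful", "benign"]
V = len(VOCAB)
IDX = {t: i for i, t in enumerate(VOCAB)}

# Tiny "model": one row of logits per context token (a bigram table).
random.seed(0)
logits = [[random.uniform(-0.1, 0.1) for _ in range(V)] for _ in range(V)]

def softmax(row):
    m = max(row)
    exps = [math.exp(x - m) for x in row]
    s = sum(exps)
    return [e / s for e in exps]

# A "1K examples" SFT set in miniature: (prompt token, desired next token).
sft_data = [("harmful", "<refuse>"), ("benign", "<comply>")] * 8

def sft_step(lr=0.5):
    """One full-batch gradient step on the mean cross-entropy loss."""
    loss = 0.0
    grad = [[0.0] * V for _ in range(V)]
    for ctx, tgt in sft_data:
        c, t = IDX[ctx], IDX[tgt]
        p = softmax(logits[c])
        loss -= math.log(p[t])
        for j in range(V):  # d(loss)/d(logit_j) = p_j - 1[j == t]
            grad[c][j] += p[j] - (1.0 if j == t else 0.0)
    n = len(sft_data)
    for c in range(V):
        for j in range(V):
            logits[c][j] -= lr * grad[c][j] / n
    return loss / n

losses = [sft_step() for _ in range(50)]
p_refuse = softmax(logits[IDX["harmful"]])[IDX["<refuse>"]]
print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
print(f"P(<refuse> | harmful) = {p_refuse:.2f}")
```

After a few dozen steps the loss drops and the toy model assigns high probability to the refusal token after a harmful prompt, which is the basic mechanism SFT-based alignment relies on; AltTrain's contribution, per the abstract, is in what the curated targets look like (restructured reasoning) rather than in the training objective.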