When Safety Fails Before the Answer: Benchmarking Harmful Behavior Detection in Reasoning Chains

arXiv cs.CL / 4/22/2026


Key Points

  • The paper argues that safety evaluation of large reasoning models should consider how harmful behavior emerges during multi-step reasoning, not just the final answer.
  • It introduces HarmThoughts, a new benchmark that labels harmful reasoning traces at sentence-level granularity using a taxonomy of 16 harmful behaviors across four functional groups.
  • The dataset includes 56,931 sentences from 1,018 reasoning traces generated by four model families, enabling step-wise analysis of how harm propagates through distinct behavioral stages.
  • Experiments using HarmThoughts show that current harmful-behavior detectors have difficulty with fine-grained, nuanced sentence-level classification in reasoning traces, especially around harm emergence and execution categories.
  • The benchmark includes both white-box and black-box detector comparisons, highlighting the need for improved process-level safety monitoring and failure diagnosis.

Abstract

Large reasoning models (LRMs) produce complex, multi-step reasoning traces, yet safety evaluation remains focused on final outputs, overlooking how harm emerges during reasoning. When jailbroken, harm does not appear instantaneously but unfolds through distinct behavioral steps such as suppressing refusal, rationalizing compliance, decomposing harmful tasks, and concealing risk. However, no existing benchmark captures this process at sentence-level granularity within reasoning traces -- a key step toward reliable safety monitoring, interventions, and systematic failure diagnosis. To address this gap, we introduce HarmThoughts, a benchmark for step-wise safety evaluation of reasoning traces. HarmThoughts is built on our proposed harm taxonomy of 16 harmful reasoning behaviors across four functional groups that characterize how harm propagates rather than what harm is produced. The dataset consists of 56,931 sentences from 1,018 reasoning traces generated by four model families, each annotated with fine-grained sentence-level behavioral labels. Using HarmThoughts, we analyze harm propagation patterns across reasoning traces, identifying common behavioral trajectories and drift points where reasoning transitions from safe to unsafe. Finally, we systematically compare white-box and black-box detectors on the task of identifying harmful reasoning behaviors on HarmThoughts. Our results show that existing detectors struggle with fine-grained behavior detection in reasoning traces, particularly for nuanced categories within harm emergence and execution, highlighting a critical gap in process-level safety monitoring. HarmThoughts is available publicly at: https://huggingface.co/datasets/ishitakakkar-10/HarmThoughts
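To make the "drift point" idea concrete, here is a minimal sketch of what sentence-level labeling and drift detection could look like. The label names and data layout below are illustrative assumptions for exposition, not the paper's actual taxonomy or dataset schema.

```python
from typing import Optional

SAFE = "safe"  # placeholder label for benign sentences (assumed, not the paper's)

def first_drift_point(labels: list[str]) -> Optional[int]:
    """Return the index of the first sentence not labeled safe,
    i.e., where the trace transitions from safe to unsafe reasoning.
    Returns None if the whole trace stays safe."""
    for i, label in enumerate(labels):
        if label != SAFE:
            return i
    return None

# A toy reasoning trace: each sentence paired with a hypothetical behavior label.
trace = [
    ("The user asks how to bypass a login form.", SAFE),
    ("I should refuse, but maybe this is just for testing.", "rationalizing_compliance"),
    ("Step 1: enumerate common default credentials.", "decomposing_harmful_task"),
]

labels = [label for _, label in trace]
print(first_drift_point(labels))  # -> 1 (drift at the second sentence)
```

A sentence-level detector evaluated on such data would be scored per sentence rather than per final answer, which is what makes the benchmark's process-level failure diagnosis possible.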