When Safety Fails Before the Answer: Benchmarking Harmful Behavior Detection in Reasoning Chains
arXiv cs.CL / 4/22/2026
Key Points
- The paper argues that safety evaluation of large reasoning models should consider how harmful behavior emerges during multi-step reasoning, not just the final answer.
- It introduces HarmThoughts, a new benchmark that labels harmful reasoning traces at sentence-level granularity using a taxonomy of 16 harmful behaviors across four functional groups.
- The dataset includes 56,931 sentences from 1,018 reasoning traces generated by four model families, enabling step-wise analysis of how harm propagates through distinct behavioral stages.
- Experiments using HarmThoughts show that current harmful-behavior detectors struggle with fine-grained sentence-level classification in reasoning traces, especially in the harm-emergence and execution categories (see the sketch after this list).
- The evaluation compares both white-box and black-box detectors, highlighting the need for better process-level safety monitoring and failure diagnosis.
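To make the sentence-level setup concrete, here is a minimal Python sketch of what a labeled record and a detector scoring loop could look like. The `LabeledSentence` schema, its field names, and the `evaluate_detector` helper are illustrative assumptions for this summary, not the released HarmThoughts format or the paper's evaluation code.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, Optional

# Hypothetical record layout for one sentence of a labeled reasoning trace.
# Field names are assumptions for illustration, not the HarmThoughts schema.
@dataclass
class LabeledSentence:
    trace_id: str            # which reasoning trace the sentence came from
    position: int            # sentence index within that trace
    text: str                # the sentence itself
    behavior: Optional[str]  # one of 16 harmful-behavior labels, or None if benign
    group: Optional[str]     # one of four functional groups, or None if benign

def evaluate_detector(
    detector: Callable[[str], bool],
    sentences: Iterable[LabeledSentence],
) -> tuple[float, float, float]:
    """Score a binary harmful/benign detector at sentence granularity.

    `detector` maps a sentence string to True (flagged harmful) or False.
    Returns (precision, recall, F1) against the gold sentence labels.
    """
    tp = fp = fn = 0
    for s in sentences:
        pred = detector(s.text)
        gold = s.behavior is not None
        if pred and gold:
            tp += 1
        elif pred and not gold:
            fp += 1
        elif not pred and gold:
            fn += 1
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy usage: a keyword heuristic scored on two hand-written sentences.
corpus = [
    LabeledSentence("t1", 0, "First, gather publicly available documentation.", None, None),
    LabeledSentence("t1", 1, "Then bypass the lock by shimming the latch.", "harm-execution", "execution"),
]
print(evaluate_detector(lambda text: "bypass" in text, corpus))  # (1.0, 1.0, 1.0)
```

Scoring per `group` rather than pooled would mirror the paper's step-wise analysis, where emergence- and execution-stage sentences are reported as the hardest to classify.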