CausalDetox: Causal Head Selection and Intervention for Language Model Detoxification

arXiv cs.CL · April 17, 2026


Key Points

  • The paper proposes CAUSALDETOX, a framework that locates the attention heads causally responsible for toxic outputs in large language models.
  • It uses Probability of Necessity and Sufficiency (PNS) to find a minimal set of heads that are both necessary and sufficient for toxicity.
  • CAUSALDETOX intervenes on the identified heads via two approaches: input-specific inference-time steering (Local Inference-Time Intervention) and permanent unlearning via PNS-guided fine-tuning.
  • The authors introduce PARATOX, a benchmark of aligned toxic/non-toxic sentence pairs for counterfactual evaluation of detoxification.
  • Experiments on multiple benchmarks report up to 5.34% greater toxicity reduction than baselines while preserving linguistic fluency, along with a reported 7x speedup in head selection.
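The PNS-based selection described above can be sketched empirically: a head is a candidate causal head if ablating it removes toxicity (necessity) and patching its toxic-run activation into a non-toxic input induces toxicity (sufficiency). The sketch below is a hedged illustration, not the paper's implementation; `run_model`, its `ablate`/`patch` interface, and the toy model are all hypothetical stand-ins for a real ablation/activation-patching harness.

```python
# Hedged sketch of PNS-style head scoring. Assumes a `run_model(x, ablate=h,
# patch=h)` oracle returning True iff the output is toxic; this interface is
# an assumption, not the paper's actual API.

def pns_score(head, pairs, run_model):
    """Estimate a PNS-like score for one head over (toxic, non-toxic) input
    pairs: the fraction of pairs where ablating the head removes toxicity
    (necessity) AND patching its activation induces toxicity (sufficiency)."""
    hits = 0
    for toxic_x, clean_x in pairs:
        necessary = not run_model(toxic_x, ablate=head)   # toxicity gone?
        sufficient = run_model(clean_x, patch=head)       # toxicity induced?
        if necessary and sufficient:
            hits += 1
    return hits / len(pairs)

def select_heads(heads, pairs, run_model, k=3):
    """Rank heads by PNS score and keep the top-k as the candidate set."""
    ranked = sorted(heads, key=lambda h: pns_score(h, pairs, run_model),
                    reverse=True)
    return ranked[:k]

# Toy stand-in model in which head 5 is the sole cause of "toxicity".
def toy_run_model(x, ablate=None, patch=None):
    if patch == 5:
        return True                        # patching the causal head induces toxicity
    return x == "toxic" and ablate != 5    # toxic input stays toxic unless head 5 is ablated
```

On the toy model, `select_heads(range(8), [("toxic", "clean")] * 4, toy_run_model, k=1)` recovers head 5, since it is the only head that is both necessary and sufficient.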

Abstract

Large language models (LLMs) frequently generate toxic content, posing significant risks for safe deployment. Current mitigation strategies often degrade generation quality or require costly human annotation. We propose CAUSALDETOX, a framework that identifies and intervenes on the specific attention heads causally responsible for toxic generation. Using the Probability of Necessity and Sufficiency (PNS), we isolate a minimal set of heads that are necessary and sufficient for toxicity. We utilize these components via two complementary strategies: (1) Local Inference-Time Intervention, which constructs dynamic, input-specific steering vectors for context-aware detoxification, and (2) PNS-Guided Fine-Tuning, which permanently unlearns toxic representations. We also introduce PARATOX, a novel benchmark of aligned toxic/non-toxic sentence pairs enabling controlled counterfactual evaluation. Experiments on ToxiGen, ImplicitHate, and ParaDetox show that CAUSALDETOX achieves up to 5.34% greater toxicity reduction compared to baselines while preserving linguistic fluency, and offers a 7x speedup in head selection.
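The inference-time intervention can be illustrated with a simplified, static variant of the idea: derive a detox direction from the difference between non-toxic and toxic activations at the selected heads, then shift each head's output along it during generation. This is a hedged sketch only; the paper constructs dynamic, input-specific vectors, and the function names and the scale `alpha` here are assumptions.

```python
import numpy as np

def steering_vector(toxic_acts, clean_acts):
    """Difference-of-means direction pointing from toxic toward non-toxic
    activations, normalized so that `alpha` alone sets the strength.
    Inputs are (num_samples, head_dim) activation matrices."""
    v = clean_acts.mean(axis=0) - toxic_acts.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-8)

def apply_steering(head_output, v, alpha=4.0):
    """Shift a selected head's output along the detox direction at
    inference time, leaving all other heads untouched."""
    return head_output + alpha * v
```

In a real model this shift would typically be applied only at the PNS-selected heads, e.g. from inside a forward hook on the attention module.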