Detoxification for LLM: From Dataset Itself

arXiv cs.CL / 4/22/2026


Key Points

  • The paper argues that most LLM detoxification methods address toxicity after training or during inference, but not the root cause: toxic content in the pretraining dataset itself.
  • It proposes HSPD (Hierarchical Semantic-Preserving Detoxification), which detoxifies raw corpora by rewriting toxic spans while preserving their semantics using SoCD (Soft Contrastive Decoding).
  • The authors claim the detoxified corpus can be used as a drop-in replacement for fine-tuning and other training pipelines, aiming to reduce toxic behavior learned during pretraining.
  • Experiments on GPT2-XL report improved detoxification performance, lowering Toxicity Probability from 0.42 to 0.18 and Expected Maximum Toxicity from 0.43 to 0.20.
  • Results are also reported to be consistently strong on LLaMA2-7B, OPT-6.7B, and Falcon-7B, suggesting corpus-level, semantics-preserving rewriting can suppress downstream toxicity without sacrificing data utility.

Abstract

Existing detoxification methods for large language models focus mainly on the post-training stage or inference time; few tackle the source of toxicity, namely the dataset itself. Such training-based or controllable-decoding approaches cannot completely suppress a model's inherent toxicity, whereas detoxifying the pretraining dataset fundamentally reduces the toxicity the model learns during training. We therefore detoxify raw corpora directly with SoCD (Soft Contrastive Decoding), which guides an LLM to localize and rewrite toxic spans while preserving semantics, within our proposed HSPD (Hierarchical Semantic-Preserving Detoxification) pipeline. The result is a detoxified corpus that can serve as a drop-in replacement for the original in fine-tuning or other training. On GPT2-XL, HSPD attains state-of-the-art detoxification, reducing Toxicity Probability (TP) from 0.42 to 0.18 and Expected Maximum Toxicity (EMT) from 0.43 to 0.20. We further validate consistent best-in-class results on LLaMA2-7B, OPT-6.7B, and Falcon-7B. These findings show that semantics-preserving, corpus-level rewriting with HSPD effectively suppresses downstream toxicity while retaining data utility, enabling seamless source-level mitigation and reducing the cost of later model-behavior adjustment. (Code is available at: https://github.com/ntsw2001/data_detox_for_llm)
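The abstract does not spell out how SoCD combines the two distributions, but the general idea of contrastive decoding is to penalize tokens favored by an undesirable (here, toxicity-conditioned) scorer relative to a base model, "softly" via reweighted logits rather than hard filtering. The sketch below illustrates that idea on toy logit vectors; the function name, the linear contrast `base - alpha * toxic`, and the `alpha` parameter are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def soft_contrastive_decode(base_logits: np.ndarray,
                            toxic_logits: np.ndarray,
                            alpha: float = 0.5) -> np.ndarray:
    """Hypothetical sketch of soft contrastive decoding:
    downweight tokens that a toxicity-conditioned scorer prefers,
    by subtracting a scaled copy of its logits from the base logits.
    alpha controls how aggressively toxic preferences are suppressed."""
    contrast = base_logits - alpha * toxic_logits
    return softmax(contrast)

# Toy example over a 3-token vocabulary:
# the base model prefers token 2, but the toxic scorer
# also strongly prefers token 2, so the contrast shifts
# probability mass to the benign runner-up, token 1.
base = np.array([1.0, 2.0, 3.0])
toxic = np.array([0.0, 0.0, 4.0])
p = soft_contrastive_decode(base, toxic, alpha=1.0)
```

With `alpha=0` this reduces to ordinary sampling from the base model, so the parameter trades off detoxification strength against fidelity to the original text, which is consistent with the paper's emphasis on preserving semantics while rewriting.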