RoTRAG: Rule of Thumb Reasoning for Conversation Harm Detection with Retrieval-Augmented Generation

arXiv cs.CL · April 21, 2026

📰 News · Models & Research

Key Points

  • RoTRAG is a retrieval-augmented framework for detecting harm in multi-turn dialogues that reasons over full conversational context rather than isolated utterances.
  • It grounds LLM-based harm assessment in concise, human-written moral “Rules of Thumb” (RoTs) retrieved from an external corpus, improving consistency and interpretability.
  • The system performs turn-level reasoning and final severity classification using the retrieved normative evidence instead of relying only on parametric knowledge.
  • To reduce cost, RoTRAG includes a lightweight binary routing classifier that determines whether a turn needs retrieval-based reasoning or can reuse existing context.
  • Experiments on ProsocialDialog and Safety Reasoning Multi Turn Dialogue show about a 40% average relative F1 improvement and an 8.4% average relative reduction in distributional error, while lowering redundant computation.
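The paper does not publish its retriever or prompt format, but the per-turn pipeline described above can be sketched roughly as: embed the current turn, retrieve the top-k most similar Rules of Thumb from the corpus, and hand them to the LLM as normative evidence. The sketch below is a minimal illustration using a bag-of-words cosine similarity as a stand-in for a learned retriever; the mini-corpus, helper names, and prompt wording are all hypothetical.

```python
from collections import Counter
import math

# Hypothetical mini-corpus of Rules of Thumb (RoTs); RoTRAG retrieves
# from a much larger external corpus.
ROT_CORPUS = [
    "It is wrong to encourage someone to hurt themselves.",
    "You should not insult people based on their identity.",
    "It is good to offer support to someone in distress.",
    "It is harmful to share instructions for dangerous activities.",
]

def _bow(text):
    """Crude lowercase bag-of-words vector (stand-in for a dense retriever)."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve_rots(turn, corpus=ROT_CORPUS, k=2):
    """Return the top-k RoTs most similar to the current dialogue turn."""
    q = _bow(turn)
    return sorted(corpus, key=lambda r: _cosine(q, _bow(r)), reverse=True)[:k]

def build_prompt(turn, rots):
    """Assemble retrieved RoTs as explicit normative evidence for the LLM judge."""
    evidence = "\n".join(f"- {r}" for r in rots)
    return (
        f"Rules of Thumb:\n{evidence}\n\n"
        f"Turn: {turn}\n"
        "Classify the harm severity of this turn given the rules above."
    )
```

In the actual system the retrieved RoTs ground both the turn-level reasoning and the final severity classification, which is what gives the judgments their interpretability: each decision can be traced back to an explicit, human-written norm.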

Abstract

Detecting harmful content in multi-turn dialogue requires reasoning over the full conversational context rather than isolated utterances. However, most existing methods rely mainly on a model's internal parametric knowledge, without explicit grounding in external normative principles. This often leads to inconsistent judgments in socially nuanced contexts, limited interpretability, and redundant reasoning across turns. To address this, we propose RoTRAG, a retrieval-augmented framework that incorporates concise, human-written moral norms, called Rules of Thumb (RoTs), into LLM-based harm assessment. For each turn, RoTRAG retrieves relevant RoTs from an external corpus and uses them as explicit normative evidence for turn-level reasoning and final severity classification. To improve efficiency, we further introduce a lightweight binary routing classifier that decides whether a new turn requires retrieval-grounded reasoning or can reuse existing context. Experiments on ProsocialDialog and Safety Reasoning Multi Turn Dialogue show that RoTRAG consistently improves both harm classification and severity estimation over competitive baselines, with an average relative gain of around 40% in F1 across benchmark datasets and an average relative reduction of 8.4% in distributional error, while reducing redundant computation without sacrificing performance.