Large Language Models in the Abuse Detection Pipeline

arXiv cs.CL / 4/3/2026


Key Points

  • The paper surveys how large language models (LLMs) can be integrated into the full Abuse Detection Lifecycle (ADL) to handle increasingly complex online abuse beyond what static classifiers and heavy labeling can manage.
  • It breaks the ADL into four stages—Label & Feature Generation, Detection, Review & Appeals, and Auditing & Governance—and synthesizes emerging research and industry practices for each stage.
  • The authors describe production-relevant architectural considerations and discuss where LLMs add value, including contextual reasoning, policy interpretation, explanation generation, and cross-modal understanding.
  • The paper also emphasizes limitations and operational challenges for LLM-driven abuse detection, focusing on latency, cost-efficiency, determinism, adversarial robustness, and fairness.
  • It concludes with key future research directions needed to make LLMs reliable and accountable components in large-scale, governed safety systems.

Abstract

Online abuse has grown increasingly complex, spanning toxic language, harassment, manipulation, and fraudulent behavior. Traditional machine-learning approaches dependent on static classifiers and labor-intensive labeling struggle to keep pace with evolving threat patterns and nuanced policy requirements. Large Language Models introduce new capabilities for contextual reasoning, policy interpretation, explanation generation, and cross-modal understanding, enabling them to support multiple stages of modern safety systems. This survey provides a lifecycle-oriented analysis of how LLMs are being integrated into the Abuse Detection Lifecycle (ADL), which we define across four stages: (I) Label & Feature Generation, (II) Detection, (III) Review & Appeals, and (IV) Auditing & Governance. For each stage, we synthesize emerging research and industry practices, highlight architectural considerations for production deployment, and examine the strengths and limitations of LLM-driven approaches. We conclude by outlining key challenges, including latency, cost-efficiency, determinism, adversarial robustness, and fairness, and discuss future research directions needed to operationalize LLMs as reliable, accountable components of large-scale abuse-detection and governance systems.