More Than Sum of Its Parts: Deciphering Intent Shifts in Multimodal Hate Speech Detection

arXiv cs.CL / 3/24/2026


Key Points

  • The paper tackles the difficulty of detecting hate speech in multimodal social media content, where harmful intent can emerge from the interaction between text and image rather than either modality alone.
  • It replaces simple binary classification with a fine-grained framework focused on semantic intent shifts, including cases where benign cues combine to form implicit hate or where language and vision invert/neutralize toxicity.
  • The authors introduce the H-VLI (Hate via Vision-Language Interplay) benchmark, designed so ground-truth intent depends on cross-modal interplay rather than overt slurs.
  • To address this, they propose ARCADE (Asymmetric Reasoning via Courtroom Agent DEbate), an agent-debate framework that simulates a courtroom argument to push models to examine deeper semantic cues before delivering a verdict.
  • Experiments show ARCADE substantially improves performance on the H-VLI benchmark for challenging implicit cases while staying competitive on existing hate-speech benchmarks, and the code/data are released publicly.

Abstract

Combating hate speech on social media is critical for securing cyberspace, yet relies heavily on the efficacy of automated detection systems. As content formats evolve, hate speech is transitioning from solely plain text to complex multimodal expressions, making implicit attacks harder to spot. Current systems, however, often falter on these subtle cases, as they struggle with multimodal content where the emergent meaning transcends the aggregation of individual modalities. To bridge this gap, we move beyond binary classification to characterize semantic intent shifts where modalities interact to construct implicit hate from benign cues or neutralize toxicity through semantic inversion. Guided by this fine-grained formulation, we curate the Hate via Vision-Language Interplay (H-VLI) benchmark where the true intent hinges on the intricate interplay of modalities rather than overt visual or textual slurs. To effectively decipher these complex cues, we further propose the Asymmetric Reasoning via Courtroom Agent DEbate (ARCADE) framework. By simulating a judicial process where agents actively argue for accusation and defense, ARCADE forces the model to scrutinize deep semantic cues before reaching a verdict. Extensive experiments demonstrate that ARCADE significantly outperforms state-of-the-art baselines on H-VLI, particularly for challenging implicit cases, while maintaining competitive performance on established benchmarks. Our code and data are available at: https://github.com/Sayur1n/H-VLI
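The courtroom-debate idea can be made concrete with a small sketch. This is not the authors' ARCADE implementation: the prosecutor, defender, and judge agents are stub functions standing in for prompted vision-language models, and `cross_modal_shift` is a hypothetical precomputed flag substituting for a real model's cross-modal judgment.

```python
# Minimal sketch of a courtroom-style agent debate for multimodal hate
# detection, in the spirit of ARCADE. All names here are illustrative
# assumptions, not the paper's actual API.

def prosecutor(sample: dict) -> str:
    """Argues the text-image pair is hateful by stressing cross-modal cues."""
    return (f"Accusation: caption {sample['text']!r} recontextualizes "
            f"image {sample['image']!r} into an implicit attack.")

def defender(sample: dict) -> str:
    """Argues each modality is benign when read on its own."""
    return (f"Defense: {sample['text']!r} is harmless in isolation, and "
            f"{sample['image']!r} contains no slur or hateful symbol.")

def judge(sample: dict, transcript: list) -> str:
    """Stub verdict: a real judge agent would weigh the full transcript
    with a VLM; here a hypothetical precomputed flag decides."""
    return "hateful" if sample.get("cross_modal_shift") else "benign"

def debate(sample: dict, rounds: int = 2) -> tuple:
    """Alternate accusation and defense for a fixed number of rounds,
    then return the verdict alongside the argument transcript."""
    transcript = []
    for _ in range(rounds):
        transcript.append(("prosecution", prosecutor(sample)))
        transcript.append(("defense", defender(sample)))
    return judge(sample, transcript), transcript

verdict, transcript = debate(
    {"text": "nice neighborhood", "image": "img_042.png",
     "cross_modal_shift": True})
```

The asymmetry the paper emphasizes would live in the real prompts: the prosecution is pushed to surface implicit cross-modal readings that a single-pass classifier skips, while the defense tests whether a benign interpretation survives.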