More Than Sum of Its Parts: Deciphering Intent Shifts in Multimodal Hate Speech Detection
arXiv cs.CL / 3/24/2026
Key Points
- The paper tackles the difficulty of detecting hate speech in multimodal social media content, where harmful intent can emerge from the interaction between text and image rather than either modality alone.
- It replaces simple binary classification with a fine-grained framework focused on semantic intent shifts, including cases where benign cues combine to form implicit hate or where language and vision invert/neutralize toxicity.
- The authors introduce the H-VLI (Hate via Vision-Language Interplay) benchmark, designed so ground-truth intent depends on cross-modal interplay rather than overt slurs.
- To address this, they propose ARCADE, an "agent debate" framework that simulates a courtroom-style argument, pushing models to weigh deeper semantic cues from both modalities before issuing a verdict.
- Experiments show ARCADE substantially improves performance on the H-VLI benchmark for challenging implicit cases while staying competitive on existing hate-speech benchmarks, and the code/data are released publicly.
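The courtroom-style debate described above can be sketched as a simple multi-agent loop. This is a hypothetical illustration, not the paper's ARCADE implementation: the role names, round structure, and the `call_model` stub are all assumptions standing in for real vision-language model calls.

```python
# Hypothetical sketch of a courtroom-style agent debate for multimodal
# hate-speech classification. Role prompts, round count, and verdict logic
# are illustrative assumptions, not the ARCADE method from the paper.

def call_model(role: str, post: dict, transcript: list[str]) -> str:
    """Stand-in for a vision-language model call; swap in a real API."""
    # Canned behavior for demonstration: each role produces an argument
    # grounded in the text-image pair and the debate so far.
    return f"[{role}] argument about text={post['text']!r} + image={post['image']!r}"

def debate(post: dict, rounds: int = 2) -> dict:
    """Run a prosecutor/defender exchange, then have a judge rule."""
    transcript: list[str] = []
    for _ in range(rounds):
        # Prosecutor argues the cross-modal pairing carries hateful intent;
        # defender argues for a benign or neutralized reading.
        for role in ("prosecutor", "defender"):
            transcript.append(call_model(role, post, transcript))
    # A judge agent weighs both sides before the final verdict, so the
    # decision rests on the argued interplay rather than surface cues.
    verdict = call_model("judge", post, transcript)
    return {"transcript": transcript, "verdict": verdict}

result = debate({"text": "nice neighborhood", "image": "photo_of_fence.jpg"})
```

The point of the structure is that neither modality is judged in isolation: each side must argue about the text-image interaction, which is exactly where the paper locates implicit intent shifts.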