MEME-Fusion@CHiPSAL 2026: Multimodal Ablation Study of Hate Detection and Sentiment Analysis on Nepali Memes

arXiv cs.CL / 4/17/2026

📰 News · Tools & Practical Usage · Models & Research

Key Points

  • The paper introduces a system for the CHiPSAL 2026 shared task to detect hate speech and classify sentiment in Nepali memes written in the Devanagari script.
  • It uses a hybrid cross-modal attention fusion approach that connects CLIP (visual) and BGE-M3 (multilingual text) embeddings via 4-head attention plus a learnable gating network that weights each modality on a per-sample basis.
  • Experiments across eight configurations show that explicit cross-modal reasoning improves F1-macro by 5.9% over text-only baselines for Subtask A (binary hate detection).
  • The study surfaces two critical issues in this low-resource, script-specific setting: vision models trained on English-centric data perform near-randomly on Devanagari text, and standard ensemble methods can fail catastrophically under data scarcity because the ensembled models overfit in correlated ways.
  • The authors provide code for the proposed approach on GitHub for reproducibility and further research.
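The fusion described in the key points can be sketched end to end. The sketch below is illustrative, not the authors' implementation: the shared dimension (64), the random initialization, and the exact attention wiring (text queries attending to image features) are assumptions, since the summary specifies only 4-head attention between CLIP and BGE-M3 features plus a learnable gate. Encoder outputs are replaced here by random vectors with the encoders' real sizes (512-d for CLIP ViT-B/32, 1024-d for BGE-M3).

```python
import numpy as np

rng = np.random.default_rng(0)

D = 64        # shared fusion dimension (assumed; not stated in the summary)
H = 4         # number of attention heads, as in the paper
Dh = D // H   # per-head width


def multihead_attention(q, kv, Wq, Wk, Wv, Wo):
    """4-head scaled dot-product attention: queries q over keys/values kv."""
    Q = (q @ Wq).reshape(-1, H, Dh).transpose(1, 0, 2)    # (H, Tq, Dh)
    K = (kv @ Wk).reshape(-1, H, Dh).transpose(1, 0, 2)   # (H, Tk, Dh)
    V = (kv @ Wv).reshape(-1, H, Dh).transpose(1, 0, 2)   # (H, Tk, Dh)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(Dh)       # (H, Tq, Tk)
    attn = np.exp(scores - scores.max(-1, keepdims=True))
    attn /= attn.sum(-1, keepdims=True)                   # softmax over keys
    out = (attn @ V).transpose(1, 0, 2).reshape(-1, D)    # concat heads -> (Tq, D)
    return out @ Wo


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


# Stand-ins for frozen encoder outputs (one meme sample).
img = rng.standard_normal((1, 512))    # CLIP ViT-B/32 image embedding
txt = rng.standard_normal((1, 1024))   # BGE-M3 multilingual text embedding

# Learned projections into the shared fusion space (random weights here).
Wi = rng.standard_normal((512, D)) * 0.02
Wt = rng.standard_normal((1024, D)) * 0.02
zi, zt = img @ Wi, txt @ Wt            # each (1, D)

# Cross-modal attention: text representation attends to visual features.
Wq, Wk, Wv, Wo = (rng.standard_normal((D, D)) * 0.02 for _ in range(4))
ctx = multihead_attention(zt, zi, Wq, Wk, Wv, Wo)         # (1, D)

# Learnable gate g in (0, 1) dynamically weights modality contributions
# per sample; the fused vector would feed the classification head.
Wg = rng.standard_normal((2 * D, 1)) * 0.02
g = sigmoid(np.concatenate([ctx, zt], axis=-1) @ Wg)      # (1, 1)
fused = g * ctx + (1 - g) * zt                            # (1, D)
```

With trained weights, `fused` would be the per-meme representation passed to the Subtask A or Subtask B classifier; the gate lets text-dominant memes down-weight an uninformative visual signal.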

Abstract

Hate speech detection in Devanagari-scripted social media memes presents compounded challenges: multimodal content structure, script-specific linguistic complexity, and extreme data scarcity in low-resource settings. This paper presents our system for the CHiPSAL 2026 shared task, addressing both Subtask A (binary hate speech detection) and Subtask B (three-class sentiment classification: positive, neutral, negative). We propose a hybrid cross-modal attention fusion architecture that combines CLIP (ViT-B/32) for visual encoding with BGE-M3 for multilingual text representation, connected through 4-head self-attention and a learnable gating network that dynamically weights modality contributions on a per-sample basis. Systematic evaluation across eight model configurations demonstrates that explicit cross-modal reasoning achieves a 5.9% F1-macro improvement over text-only baselines on Subtask A, while uncovering two unexpected but critical findings: English-centric vision models exhibit near-random performance on Devanagari script, and standard ensemble methods catastrophically degrade under data scarcity (N ≈ 850 per fold) due to correlated overfitting. The code can be accessed at https://github.com/Tri-Yantra-Technologies/MEME-Fusion/
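The headline metric, macro-averaged F1, is the unweighted mean of per-class F1 scores, so minority classes count as much as the majority class; this matters for imbalanced meme datasets like the one described here. A minimal reference implementation of the metric (the toy labels below are invented for illustration, not taken from the paper):

```python
def f1_macro(y_true, y_pred, labels):
    """Macro-averaged F1: unweighted mean of per-class F1 scores."""
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)


# Toy 3-class sentiment run (negative=0, neutral=1, positive=2).
y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]
score = f1_macro(y_true, y_pred, labels=[0, 1, 2])
```

Because every class contributes equally to the average, a model that ignores a rare class is penalized heavily, which is why shared tasks on imbalanced data typically rank systems by this metric.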