BITS Pilani at SemEval-2026 Task 9: Structured Supervised Fine-Tuning with DPO Refinement for Polarization Detection

arXiv cs.CL · April 14, 2026


Key Points

  • SemEval-2026 Task 9 (POLAR) targets multilingual, multicultural, and multi-event detection of online polarization, where nuanced rhetoric and implicit framing make annotation expensive and error-prone.
  • The BITS Pilani approach uses a two-stage pipeline: structured supervised fine-tuning of Qwen 2.5-7B-Instruct with LoRA via an interpretable slot-filling template, followed by DPO refinement using automatically generated preference pairs.
  • Preference-based DPO is designed to reduce costly false negatives without requiring additional human-in-the-loop annotation.
  • Experiments on the SemEval 2026 POLAR dataset report that DPO refinement boosts English development recall from 0.5085 to 0.7797 and raises macro-F1 by about 5 points.
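The interpretable slot-filling template used in the SFT stage could look something like the minimal sketch below. The slot names (target, claim type, manifestation checklist, justification) follow the paper's description, but the exact wording, the `label` slot, and the parsing logic are assumptions for illustration:

```python
# Hypothetical slot-filling template for structured polarization annotation.
# Slot names mirror the paper's description; exact phrasing is an assumption.
SLOTS = ["target", "claim_type", "manifestations", "justification", "label"]

TEMPLATE = (
    "Analyze the post for political polarization.\n"
    "target: <who or what group is addressed>\n"
    "claim_type: <factual | opinion | call-to-action>\n"
    "manifestations: <checklist items, e.g. us-vs-them, delegitimization>\n"
    "justification: <one-sentence rationale>\n"
    "label: <polarized | not_polarized>\n"
)

def parse_slots(completion: str) -> dict:
    """Parse a 'slot: value' completion into a dict, keeping known slots only."""
    out = {}
    for line in completion.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            key = key.strip().lower()
            if key in SLOTS:
                out[key] = value.strip()
    return out

example = (
    "target: opposing party supporters\n"
    "claim_type: opinion\n"
    "manifestations: us-vs-them, delegitimization\n"
    "justification: The post frames the out-group as an enemy.\n"
    "label: polarized"
)
print(parse_slots(example)["label"])  # polarized
```

A structured output like this keeps the model's decision auditable: each predicted label is tied to an explicit target, claim type, and justification rather than a bare class name.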

Abstract

The POLAR SemEval-2026 Shared Task aims to detect online polarization, focusing on the identification and classification of multilingual, multicultural, and multi-event polarization. Accurate computational detection of online polarization is challenging due to nuanced rhetoric, implicit framing, and the high cost of human-in-the-loop annotation. Building on recent findings that contextual prompting enables large language models to function as strong polarization detectors, we present a two-stage approach for detecting political polarization in social media text that combines structured supervised fine-tuning with Direct Preference Optimization (DPO) refinement. We fine-tune Qwen 2.5-7B-Instruct with LoRA using an interpretable slot-filling template (target, claim type, manifestation checklist, and justification). We then apply DPO with automatically generated preference pairs to reduce costly false negatives. Experiments on the SemEval 2026 POLAR shared task dataset show that preference-based refinement improves accuracy and reduces false negatives without extra annotation. On the English development set, DPO increases recall from 0.5085 to 0.7797 and improves macro-F1 by ~5 points.
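The DPO refinement stage described above can be sketched in two parts: building preference pairs from the SFT model's false negatives, and scoring them with the standard DPO objective. The record field names (`gold`, `model_output`, `prompt`) and the exact pair-construction rule are assumptions, not the paper's confirmed implementation:

```python
import math

def build_preference_pairs(records):
    """Turn SFT false negatives into DPO preference pairs: the gold
    'polarized' completion is chosen, the model's miss is rejected.
    Field names (gold, model_output, prompt) are illustrative."""
    pairs = []
    for rec in records:
        if rec["gold"] == "polarized" and rec["model_output"] == "not_polarized":
            pairs.append({
                "prompt": rec["prompt"],
                "chosen": "label: polarized",
                "rejected": "label: not_polarized",
            })
    return pairs

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Standard DPO objective for one pair: -log sigmoid(beta * margin),
    where the margin compares policy vs. reference log-probabilities
    on the chosen and rejected completions."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

records = [
    {"prompt": "post A", "gold": "polarized", "model_output": "not_polarized"},
    {"prompt": "post B", "gold": "not_polarized", "model_output": "not_polarized"},
]
print(len(build_preference_pairs(records)))  # 1
```

Because the pairs are derived mechanically from the model's own errors against gold labels, this refinement needs no additional human annotation, which is the motivation the abstract gives for the DPO stage.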