KUET at StanceNakba Shared Task: StanceMoE: Mixture-of-Experts Architecture for Stance Detection
arXiv cs.CL / 4/3/2026
Key Points
- The paper introduces StanceMoE, a context-enhanced Mixture-of-Experts architecture for actor-level stance detection that builds on a fine-tuned BERT encoder.
- StanceMoE uses six specialized expert modules to capture heterogeneous linguistic signals, including semantic orientation, lexical cues, clause- and phrase-level patterns, framing indicators, and contrast-driven discourse shifts.
- A context-aware gating mechanism dynamically routes and weights the expert outputs based on characteristics of the input text, aiming to better handle diverse stance expressions (a minimal code sketch follows this list).
- Experiments on the StanceNakba 2026 Subtask A dataset (1,401 English texts with implicit target actors) show StanceMoE achieves a macro-F1 of 94.26%, outperforming baseline models and other BERT-based variants (the macro-F1 metric is illustrated after this list).
- The work targets the limitation of transformer stance models that rely on a single unified representation, arguing for adaptive architectures that explicitly model varying discourse and framing patterns.
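The summary above describes the architecture only at a high level. The PyTorch sketch below shows one plausible shape of a context-gated mixture of experts sitting on top of a BERT [CLS] embedding. All names here (`StanceMoEHead`, `hidden_size`, `num_experts`) are illustrative assumptions, and the identical MLP experts stand in for the paper's six specialized modules; this is a sketch of the general technique, not the authors' implementation.

```python
# Minimal sketch of a context-gated Mixture-of-Experts classification head
# over a BERT encoder. Expert design and all names are assumptions: the
# paper uses six specialized expert modules, whereas these are identical
# MLPs kept simple for brevity.
import torch
import torch.nn as nn

class StanceMoEHead(nn.Module):
    def __init__(self, hidden_size=768, num_experts=6, num_classes=3):
        super().__init__()
        # One small feed-forward expert per (hypothetical) linguistic signal.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(hidden_size, hidden_size),
                nn.GELU(),
                nn.Linear(hidden_size, num_classes),
            )
            for _ in range(num_experts)
        ])
        # Context-aware gate: scores each expert from the [CLS] representation.
        self.gate = nn.Linear(hidden_size, num_experts)

    def forward(self, cls_embedding):  # cls_embedding: (batch, hidden_size)
        # Soft routing weights over experts, conditioned on the input text.
        weights = torch.softmax(self.gate(cls_embedding), dim=-1)   # (batch, E)
        expert_logits = torch.stack(
            [expert(cls_embedding) for expert in self.experts], dim=1
        )                                                           # (batch, E, C)
        # Weighted combination of expert outputs -> final stance logits.
        return (weights.unsqueeze(-1) * expert_logits).sum(dim=1)  # (batch, C)

# Toy usage: four BERT [CLS] vectors in, four stance logit vectors out.
head = StanceMoEHead()
logits = head(torch.randn(4, 768))
```

In this sketch the gate softly mixes all six experts for every input; a top-k routing variant would instead zero out all but the highest-scoring experts. The gate here conditions on the same [CLS] vector as the experts for simplicity, though a separate pooled representation would also fit the paper's description of input-dependent routing.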
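For reference, the reported macro-F1 averages per-class F1 scores with equal weight, so minority stance classes count as much as majority ones. A quick illustration with scikit-learn, using made-up labels (the three-way label set is a toy assumption, not the shared task's actual label scheme):

```python
from sklearn.metrics import f1_score

y_true = [0, 1, 2, 2, 1, 0]  # gold stance labels (toy data)
y_pred = [0, 1, 2, 1, 1, 0]  # model predictions (toy data)

# macro-F1: unweighted mean of per-class F1, the metric reported in the paper.
print(f1_score(y_true, y_pred, average="macro"))
```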