Duluth at SemEval-2026 Task 6: DeBERTa with LLM-Augmented Data for Unmasking Political Question Evasions
arXiv cs.CL / April 23, 2026
Key Points
- The paper describes the “Duluth” system submitted to SemEval-2026 Task 6 (CLARITY) for identifying and classifying political question evasions using a two-level taxonomy of response clarity.
- The approach is built on DeBERTa-V3-base, enhanced with focal loss, layer-wise learning rate decay, and Boolean discourse features to improve clarity and evasion classification of question–answer pairs.
- To handle class imbalance, the authors generate synthetic minority-class training examples using Gemini 3 and Claude Sonnet 4.5 for LLM-augmented data augmentation.
- On the Task 1 evaluation set, Duluth’s best model reaches a Macro F1 of 0.76, ranking 8th of 40 teams, and improves minority-class recall on nuanced political discourse; the main remaining errors stem from confusion between the Ambivalent and Clear Reply classes.
- The error analysis suggests model disagreements reflect human annotator disagreements, reinforcing that annotation ambiguity remains a major challenge in this task.
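The two training tricks named above, focal loss and layer-wise learning rate decay, can be illustrated with a minimal sketch. This is not the authors' implementation: the `gamma`, `alpha`, and `decay` values are placeholder defaults, and the functions below only show the core math (per-example focal loss for a binary decision, and a geometric per-layer learning-rate schedule).

```python
import math


def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Per-example focal loss (sketch, placeholder gamma/alpha).

    Down-weights easy, well-classified examples via (1 - pt)^gamma so
    that hard or minority-class examples dominate the gradient.
    p: predicted probability of the positive class; y: true label (0/1).
    """
    pt = p if y == 1 else 1.0 - p          # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - pt) ** gamma * math.log(pt)


def layerwise_lrs(base_lr, n_layers, decay=0.9):
    """Layer-wise learning-rate decay (sketch, placeholder decay factor).

    The layer closest to the classification head keeps base_lr; each
    layer below it gets a geometrically smaller rate, preserving more
    of the pretrained encoder's lower-layer features.
    """
    return [base_lr * decay ** (n_layers - 1 - i) for i in range(n_layers)]
```

In a real fine-tuning setup the per-layer rates would be passed to the optimizer as separate parameter groups, and the focal loss would replace the standard cross-entropy over the clarity labels.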
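The LLM-augmentation step for class imbalance has a simple bookkeeping core: decide how many synthetic examples each minority class needs before prompting a model (Gemini 3 or Claude Sonnet 4.5 in the paper) to generate them. The helper below is a hypothetical sketch of that counting step only, assuming the goal is to match the majority-class count; the actual generation prompts and quotas are the paper's own.

```python
from collections import Counter


def augmentation_targets(labels):
    """How many synthetic examples to generate per class so that every
    class is brought up to the current majority-class count.

    Sketch only: the real pipeline would feed these counts into LLM
    generation prompts, one batch per minority class.
    """
    counts = Counter(labels)
    majority = max(counts.values())
    return {cls: majority - n for cls, n in counts.items()}
```

For a skewed label set such as five "Clear Reply", two "Ambivalent", and one "Evasive" example, the function requests zero, three, and four synthetic examples respectively.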