COGNAC at SemEval-2026 Task 5: LLM Ensembles for Human-Level Word Sense Plausibility Rating in Challenging Narratives
arXiv cs.CL / March 18, 2026
Key Points
- The paper describes a system for SemEval-2026 Task 5 that rates the plausibility of word senses in short stories on a 5-point Likert scale, evaluating three prompting strategies across multiple LLMs: zero-shot, chain-of-thought (CoT) with structured reasoning, and comparative prompting (see the prompt sketch after this list).
- An ensemble that averages predictions across models and prompting strategies is proposed to account for the substantial inter-annotator variation in the gold labels (see the ensemble sketch after this list).
- The best official system, an ensemble across all three prompting strategies and LLMs, placed 4th on the leaderboard with 0.88 accuracy and 0.83 Spearman's rho; post-competition experiments raised performance to 0.92 accuracy and 0.85 rho.
- Findings indicate that comparative prompting consistently improves performance and that ensembling significantly enhances alignment with mean human judgments, suggesting LLM ensembles are well suited for subjective semantic evaluation tasks with multiple annotators.
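To make the three strategies concrete, here is a hypothetical sketch of how they might be phrased; the summary does not give the paper's actual prompts, so the story, target word, and sense glosses below are illustrative placeholders only.

```python
# Hypothetical prompt templates for the three strategies named above;
# the paper's actual wording is not given in this summary.
STORY = "She waited by the bank until the ferry arrived."  # illustrative
TARGET = "bank"                                            # illustrative
SENSE = "the land alongside a river"                       # candidate sense
OTHER_SENSES = ["a financial institution"]                 # competing senses

# Zero-shot: ask directly for a 1-5 rating.
ZERO_SHOT = (
    f"Story: {STORY}\n"
    f"How plausible is it that '{TARGET}' means '{SENSE}' here? "
    "Answer with a single rating from 1 (implausible) to 5 (highly plausible)."
)

# CoT with structured reasoning: request step-by-step analysis first.
COT = (
    f"Story: {STORY}\n"
    f"Reason step by step about the context of '{TARGET}', "
    f"then rate the plausibility that it means '{SENSE}' from 1 to 5."
)

# Comparative: present competing senses side by side before rating.
COMPARATIVE = (
    f"Story: {STORY}\n"
    f"Candidate senses of '{TARGET}': {SENSE}; {'; '.join(OTHER_SENSES)}.\n"
    f"Considering all candidates, rate the plausibility of '{SENSE}' "
    "from 1 to 5."
)
```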
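And a minimal sketch of the averaging ensemble and the two reported metrics, assuming hypothetical per-system predictions and exact-match accuracy after rounding; the paper's model list and official metric definitions are not stated in this summary.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical: each row is one (model, prompting-strategy) system's
# 1-5 Likert ratings for the same set of items.
system_preds = np.array([
    [4, 2, 5, 1, 3],   # model A, zero-shot
    [5, 2, 4, 1, 3],   # model A, chain-of-thought
    [4, 3, 5, 2, 3],   # model B, comparative prompting
])
gold_mean = np.array([4.3, 2.1, 4.8, 1.4, 3.0])  # mean human judgments

# Ensemble = plain average over systems, then map back onto the scale.
ensemble = system_preds.mean(axis=0)
likert = np.clip(np.rint(ensemble), 1, 5)

# Spearman's rho against mean human judgments, plus one plausible
# accuracy definition: exact match after rounding both sides.
rho, _ = spearmanr(ensemble, gold_mean)
accuracy = float(np.mean(likert == np.rint(gold_mean)))
print(f"Spearman's rho: {rho:.2f}, accuracy: {accuracy:.2f}")
```

Averaging continuous scores before rounding, rather than majority-voting discrete labels, is one natural way to target the mean of several annotators' ratings.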