COGNAC at SemEval-2026 Task 5: LLM Ensembles for Human-Level Word Sense Plausibility Rating in Challenging Narratives
arXiv cs.CL · March 18, 2026
Key Points
- The paper describes a system for SemEval-2026 Task 5 that rates the plausibility of word senses in short stories on a 5-point Likert scale, evaluating three prompting strategies across multiple LLMs: zero-shot, chain-of-thought (CoT) with structured reasoning, and comparative prompting.
- An ensemble approach that averages predictions across models and prompting strategies is proposed to account for substantial inter-annotator variation in the gold labels.
- The best official system, an ensemble across all three prompting strategies and LLMs, placed 4th on the leaderboard with 0.88 accuracy and 0.83 Spearman's rho; post-competition experiments raised performance to 0.92 accuracy and 0.85 rho.
- Findings indicate that comparative prompting consistently improves performance and that ensembling significantly enhances alignment with mean human judgments, suggesting LLM ensembles are well suited for subjective semantic evaluation tasks with multiple annotators.
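The ensemble described above averages each item's plausibility rating across every model and prompting-strategy combination. A minimal sketch of that averaging step, using hypothetical model/prompt names and made-up ratings (the paper's actual models and data are not shown here):

```python
from statistics import mean

# Hypothetical per-item plausibility ratings (1-5 Likert scale) from three
# model x prompting-strategy runs; names and values are illustrative only.
predictions = {
    "model-a-zero-shot":   [4.0, 2.0, 5.0, 1.0],
    "model-a-cot":         [5.0, 2.0, 4.0, 2.0],
    "model-a-comparative": [4.0, 3.0, 5.0, 1.0],
}

def ensemble(preds):
    """Average each item's rating across all model/prompt runs,
    clamping the result to the 1-5 Likert range."""
    n_items = len(next(iter(preds.values())))
    return [
        min(5.0, max(1.0, mean(run[i] for run in preds.values())))
        for i in range(n_items)
    ]

scores = ensemble(predictions)
print(scores)  # continuous ensemble ratings, one per story item
```

Keeping the averaged scores continuous (rather than rounding back to integers) is what lets the ensemble track the mean of multiple human annotators, which is where the reported gain in Spearman correlation comes from.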
