AI Navigate

COGNAC at SemEval-2026 Task 5: LLM Ensembles for Human-Level Word Sense Plausibility Rating in Challenging Narratives

arXiv cs.CL / 3/18/2026


Key Points

  • The paper describes a system for SemEval-2026 Task 5 that rates the plausibility of word senses in short stories on a 5-point Likert scale, evaluating zero-shot, chain-of-thought (CoT) with structured reasoning, and comparative prompting across multiple LLMs.
  • An ensemble approach that averages predictions across models and prompting strategies is proposed to account for substantial inter-annotator variation in the gold labels.
  • The best official system, an ensemble across all three prompting strategies and LLMs, placed 4th on the leaderboard with 0.88 accuracy and 0.83 Spearman's rho; post-competition experiments raised performance to 0.92 accuracy and 0.85 rho.
  • Findings indicate that comparative prompting consistently improves performance and that ensembling significantly enhances alignment with mean human judgments, suggesting LLM ensembles are well suited for subjective semantic evaluation tasks with multiple annotators.
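The ensembling step described above — averaging Likert-scale predictions across models and prompting strategies — can be sketched as follows. This is a minimal illustration; the model and strategy names are placeholders, not the actual systems used in the paper.

```python
import statistics

# Hypothetical per-configuration ratings (1-5 Likert) for a single
# story/word-sense pair. Names are illustrative only.
predictions = {
    ("model_a", "zero_shot"): 4,
    ("model_a", "cot"): 5,
    ("model_a", "comparative"): 4,
    ("model_b", "zero_shot"): 3,
    ("model_b", "cot"): 4,
    ("model_b", "comparative"): 4,
}

# Ensemble: average the raw ratings across all models and strategies,
# yielding a continuous score that can track the mean human judgment.
ensemble_score = statistics.mean(predictions.values())
print(ensemble_score)  # 4.0
```

Keeping the averaged score continuous (rather than rounding back to an integer rating) is what lets the ensemble approximate the mean of multiple human annotators.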

Abstract

We describe our system for SemEval-2026 Task 5, which requires rating the plausibility of given word senses of homonyms in short stories on a 5-point Likert scale. Systems are evaluated by the unweighted average of accuracy (within one standard deviation of mean human judgments) and Spearman Rank Correlation. We explore three prompting strategies using multiple closed-source commercial LLMs: (i) a baseline zero-shot setup, (ii) Chain-of-Thought (CoT) style prompting with structured reasoning, and (iii) a comparative prompting strategy for evaluating candidate word senses simultaneously. Furthermore, to account for the substantial inter-annotator variation present in the gold labels, we propose an ensemble setup by averaging model predictions. Our best official system, comprising an ensemble of LLMs across all three prompting strategies, placed 4th on the competition leaderboard with 0.88 accuracy and 0.83 Spearman's rho (0.86 average). Post-competition experiments with additional models further improved this performance to 0.92 accuracy and 0.85 Spearman's rho (0.89 average). We find that comparative prompting consistently improved performance across model families, and model ensembling significantly enhanced alignment with mean human judgments, suggesting that LLM ensembles are especially well suited for subjective semantic evaluation tasks involving multiple annotators.
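The evaluation metric described in the abstract — the unweighted average of accuracy (a prediction counts as correct if it falls within one standard deviation of the mean human judgment) and Spearman rank correlation — can be sketched with toy data. The gold means, standard deviations, and predictions below are invented for illustration, not taken from the task.

```python
import numpy as np
from scipy.stats import spearmanr

# Toy gold data: per-item mean and standard deviation of the human
# Likert ratings, plus one system's predictions (all illustrative).
gold_mean = np.array([4.2, 1.5, 3.0, 2.8, 4.9])
gold_std  = np.array([0.7, 0.5, 1.1, 0.9, 0.3])
pred      = np.array([4.0, 2.1, 3.5, 2.0, 5.0])

# Accuracy: fraction of predictions within one SD of the mean judgment.
accuracy = np.mean(np.abs(pred - gold_mean) <= gold_std)

# Spearman rank correlation between predictions and mean judgments.
rho, _ = spearmanr(pred, gold_mean)

# Official score: unweighted average of the two metrics.
score = (accuracy + rho) / 2
print(accuracy, rho, score)  # 0.8 0.9 0.85
```

Note that the accuracy component is tolerant by construction: items where annotators disagree widely (large SD) accept a broader range of predictions, which is one reason averaging over an ensemble aligns well with this metric.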