SemEval-2026 Task 4: Narrative Story Similarity and Narrative Representation Learning

arXiv cs.CL / 4/24/2026


Key Points

  • SemEval-2026 Task 4 (NSNRL) frames narrative similarity as a binary classification task: deciding which of two candidate stories is more similar to an anchor story.
  • The organizers propose a new, narrative-theory-compatible definition of narrative similarity that aligns with intuitive human judgment.
  • The organizers release a dataset of more than 1,000 story-summary triples, each annotated at least twice, with every similarity judgment backed by at least two annotators in agreement.
  • Across the two tracks, LLM ensembles account for many of the top systems in triple-based classification, while embedding-track systems that apply pre- and post-processing to pretrained embeddings perform roughly on par with custom fine-tuned models.
  • The task website provides embedding visualizations and instance-level classification results for all teams; the organizers' analysis indicates headroom for improvement in both tracks.
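The triple-based classification setup can be sketched with a toy embedding baseline: embed the anchor and both candidates, then pick the candidate whose embedding is closer to the anchor's by cosine similarity. The bag-of-words `embed` function below is a hypothetical stand-in for a real pretrained embedding model, not part of the shared task.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (stand-in for a pretrained model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def more_similar(anchor: str, cand_a: str, cand_b: str) -> str:
    """Binary decision: which candidate story is closer to the anchor?"""
    va = embed(anchor)
    return "A" if cosine(va, embed(cand_a)) >= cosine(va, embed(cand_b)) else "B"

anchor = "a knight rescues a princess from a dragon"
cand_a = "a brave knight saves a princess from a fearsome dragon"
cand_b = "a detective solves a murder in a quiet village"
print(more_similar(anchor, cand_a, cand_b))  # → A
```

Submissions in the embedding track replace the toy `embed` with learned representations; the abstract notes that careful pre- and post-processing of off-the-shelf embeddings competed with custom fine-tuning under this decision rule.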

Abstract

We present the shared task on narrative similarity and narrative representation learning, NSNRL (pronounced "nass-na-rel"). The task operationalizes narrative similarity as a binary classification problem: determining which of two stories is more similar to an anchor story. We introduce a novel definition of narrative similarity, compatible with both narrative theory and intuitive judgment. Based on the similarity judgments collected under this definition, we also evaluate narrative embedding representations. We collected at least two annotations each for more than 1,000 story-summary triples, with each annotation backed by at least two annotators in agreement. This paper describes the sampling and annotation process for the dataset; further, we give an overview of the submitted systems and the techniques they employ. We received a total of 71 final submissions from 46 teams across our two tracks. In our triple-based classification setup, LLM ensembles make up many of the top-scoring systems, while in the embedding setup, systems applying pre- and post-processing to pretrained embedding models perform about on par with custom fine-tuned solutions. Our analysis identifies potential headroom for improvement of automated systems in both tracks. The task website includes visualizations of embeddings alongside instance-level classification results for all teams.