Explainable Semantic Textual Similarity via Dissimilar Span Detection

arXiv cs.CL / 3/24/2026


Key Points

  • The paper proposes Dissimilar Span Detection (DSD) to make Semantic Textual Similarity (STS) more interpretable by locating specific spans that reduce similarity rather than only producing a single overall score.
  • It releases a new dataset, the Span Similarity Dataset (SSD), created via a semi-automated pipeline that combines LLM-generated annotations with human verification.
  • The authors evaluate multiple baseline approaches for DSD, including unsupervised methods using LIME/SHAP and LLM-based techniques, as well as a supervised model that performs best among the tested baselines.
  • Despite improved performance from LLMs and supervised models, overall accuracy remains low, indicating the task’s inherent difficulty and the challenge of reliable negative-span attribution.
  • An additional experiment suggests that using DSD signals can improve paraphrase detection performance in related downstream tasks.
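To make the task concrete: the simplest conceivable unsupervised baseline for DSD is to align the two token sequences and flag unmatched regions as "dissimilar spans". The sketch below is our own illustration of the task setup, not any of the paper's baselines (which use LIME, SHAP, LLMs, and a supervised model); the alignment heuristic and span format are assumptions for demonstration only.

```python
from difflib import SequenceMatcher

def dissimilar_spans(text_a: str, text_b: str):
    """Return token spans of text_a with no counterpart in text_b.

    Toy unsupervised baseline: align the token sequences and report the
    unmatched regions of text_a as (start, end, span_text) triples.
    This only illustrates the DSD input/output shape; it is not one of
    the paper's evaluated methods.
    """
    tokens_a = text_a.split()
    tokens_b = text_b.split()
    matcher = SequenceMatcher(a=tokens_a, b=tokens_b, autojunk=False)
    spans = []
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        # Any non-"equal" opcode covering tokens of A marks a span of A
        # that lowers surface similarity to B.
        if tag != "equal" and i2 > i1:
            spans.append((i1, i2, " ".join(tokens_a[i1:i2])))
    return spans

print(dissimilar_spans("the cat sat on the mat",
                       "the dog sat on the mat"))
# → [(1, 2, 'cat')]
```

A purely lexical aligner like this obviously misses semantic divergence between different surface forms, which is exactly the gap the paper's LLM-based and supervised baselines target.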

Abstract

Semantic Textual Similarity (STS) is a crucial component of many Natural Language Processing (NLP) applications. However, existing approaches typically reduce semantic nuances to a single score, limiting interpretability. To address this, we introduce the task of Dissimilar Span Detection (DSD), which aims to identify semantically differing spans between pairs of texts. This can help users understand which particular words or tokens negatively affect the similarity score, and can be used to improve performance in STS-dependent downstream tasks. Furthermore, we release a new dataset suitable for the task, the Span Similarity Dataset (SSD), developed through a semi-automated pipeline combining large language models (LLMs) with human verification. We propose and evaluate several baseline methods for DSD, both unsupervised (based on LIME, SHAP, LLMs, and our own method) and supervised. While LLMs and supervised models achieve the highest performance, overall results remain low, highlighting the complexity of the task. Finally, we set up an additional experiment that shows how DSD can lead to increased performance in the specific task of paraphrase detection.
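One intuitive way DSD signals could feed paraphrase detection is to score a pair by how much of it lies outside dissimilar spans. The sketch below is a minimal illustration of that idea using a lexical aligner as a stand-in span detector; the scoring function and threshold are our assumptions and do not reproduce the paper's experimental setup.

```python
from difflib import SequenceMatcher

def span_overlap_score(text_a: str, text_b: str) -> float:
    """Fraction of tokens that fall outside dissimilar spans.

    Uses token-sequence alignment as a stand-in dissimilar-span
    detector; a real system would plug in a learned DSD model here.
    """
    tokens_a, tokens_b = text_a.split(), text_b.split()
    if not tokens_a and not tokens_b:
        return 1.0
    matcher = SequenceMatcher(a=tokens_a, b=tokens_b, autojunk=False)
    matched = sum(size for _, _, size in matcher.get_matching_blocks())
    return 2 * matched / (len(tokens_a) + len(tokens_b))

def is_paraphrase(text_a: str, text_b: str, threshold: float = 0.8) -> bool:
    # Threshold is an illustrative assumption, not a tuned value.
    return span_overlap_score(text_a, text_b) >= threshold

print(is_paraphrase("the cat sat on the mat",
                    "the dog sat on the mat"))
# → True (only one of six tokens differs)
```

The design point being illustrated: instead of thresholding an opaque similarity score, the decision is grounded in which spans diverge, so a rejected pair comes with an explanation of where the texts differ.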