Span Modeling for Idiomaticity and Figurative Language Detection with Span Contrastive Loss

arXiv cs.CL / 3/25/2026


Key Points

  • The paper targets figurative language detection—especially idioms that are often non-compositional—where standard LLM tokenization and adjacent contextual embeddings make accurate recognition difficult.
  • It proposes BERT- and RoBERTa-based span-aware models fine-tuned with a combination of slot loss and span contrastive loss (SCL), using hard negative reweighting to better separate idiomatic spans from non-idiomatic alternatives.
  • Experimental results report state-of-the-art sequence accuracy on existing datasets and ablation findings that demonstrate SCL’s effectiveness and generalizability across setups.
  • The authors also introduce a geometric-mean metric of F1 and sequence accuracy (SA) to jointly measure span awareness and overall performance.
  • The work positions span contrastive learning as a way to reduce reliance on large phrase vocabularies or heavy instruction/few-shot prompting for idiom detection.
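The paper's exact formulation of span contrastive loss is not reproduced in this summary, but the idea described above — pulling an idiomatic span's representation toward a positive example while pushing it from non-idiomatic alternatives, with harder (more similar) negatives upweighted — can be sketched as an InfoNCE-style loss. Everything below (the function name, the `temperature` and `hard_weight_scale` parameters, and the specific reweighting scheme) is a hypothetical illustration, not the authors' implementation:

```python
import numpy as np

def span_contrastive_loss(anchor, positive, negatives,
                          temperature=0.1, hard_weight_scale=2.0):
    """Illustrative InfoNCE-style span contrastive loss with hard negative
    reweighting. Hypothetical sketch; the paper's formulation may differ."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Temperature-scaled cosine similarities between the anchor span
    # embedding, its positive, and each negative span embedding.
    pos_sim = cos(anchor, positive) / temperature
    neg_sims = np.array([cos(anchor, n) for n in negatives]) / temperature

    # Hard negative reweighting: negatives most similar to the anchor
    # receive proportionally larger weight in the partition function,
    # so confusable non-idiomatic spans are pushed away harder.
    weights = np.exp(hard_weight_scale * (neg_sims - neg_sims.max()))
    weights = weights / weights.mean()

    denom = np.exp(pos_sim) + np.sum(weights * np.exp(neg_sims))
    return -(pos_sim - np.log(denom))
```

In a fine-tuning setup, `anchor` would be a pooled span representation from BERT/RoBERTa, the positive another embedding of the same idiomatic span, and the negatives embeddings of overlapping or literal-usage spans; this sketch only shows the loss arithmetic.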

Abstract

The category of figurative language contains many varieties, some of which are non-compositional in nature. This type of phrase or multi-word expression (MWE) includes idioms, which represent a single meaning that is not the sum of their words' meanings. For language models, this presents a unique problem due to tokenization and adjacent contextual embeddings. Many large language models have overcome this issue with large phrase vocabularies, though immediate recognition frequently fails without one- or few-shot prompting or instruction finetuning. The best results have been achieved with BERT-based or LSTM finetuning approaches, and the model in this paper follows one such approach. We propose BERT- and RoBERTa-based models finetuned with a combination of slot loss and span contrastive loss (SCL) with hard negative reweighting to improve idiomaticity detection, attaining state-of-the-art sequence accuracy on existing datasets. Comparative ablation studies show the effectiveness of SCL and its generalizability. We also propose the geometric mean of F1 and sequence accuracy (SA) to jointly assess a model's span awareness and general performance.
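The proposed metric combines two standard sequence-labeling scores: F1 over the positive (idiomatic) tokens and sequence accuracy, i.e. the fraction of sentences whose full label sequence is predicted exactly. Their geometric mean, sqrt(F1 × SA), is low unless a model does well on both. A minimal sketch, assuming token-level F1 and a hypothetical two-label scheme (`"IDIOM"`/`"O"`) — the paper's exact label granularity is not specified here:

```python
import math

def sequence_accuracy(gold_seqs, pred_seqs):
    # Fraction of sentences whose entire label sequence matches exactly.
    exact = sum(g == p for g, p in zip(gold_seqs, pred_seqs))
    return exact / len(gold_seqs)

def token_f1(gold_seqs, pred_seqs, positive="IDIOM"):
    # Micro-averaged F1 over the positive label across all tokens.
    tp = fp = fn = 0
    for g, p in zip(gold_seqs, pred_seqs):
        for gt, pt in zip(g, p):
            if pt == positive and gt == positive:
                tp += 1
            elif pt == positive:
                fp += 1
            elif gt == positive:
                fn += 1
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def gm_f1_sa(f1, sa):
    # Geometric mean: rewards models that score well on BOTH span-level
    # detection quality (F1) and strict whole-sequence correctness (SA).
    return math.sqrt(f1 * sa)
```

Because the geometric mean is dominated by the smaller factor, a model that finds most idiomatic tokens (high F1) but rarely labels a whole sentence perfectly (low SA) cannot inflate the combined score.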