Beyond Word Boundaries: A Hebrew Coreference Benchmark and an Evaluation Protocol for Morphologically Complex Text

arXiv cs.CL / April 21, 2026


Key Points

  • The paper proposes KibutzR, the first comprehensive coreference resolution dataset for Modern Hebrew, tailored to morphologically rich languages where mention boundaries often diverge from word boundaries.
  • It introduces a segmentation-aware evaluation protocol that scores coreference across word, sub-word, and multi-word mention levels to handle morpheme/token boundary discrepancies.
  • Experiments show that current LLMs perform substantially worse on Hebrew than on English, and that performance drops further when using raw, unsegmented text.
  • The study finds an unexpected inverse trend versus English: in Hebrew, smaller encoder-style models outperform decoder-style models, suggesting different modeling and evaluation needs for MRLs.
  • The benchmark and protocol are intended to guide future work on coreference and evaluation for other morphologically complex languages beyond Hebrew.

Abstract

Coreference Resolution (CR) is a fundamental NLP task, critical for long-form applications such as information extraction, summarization, and many business use cases. However, CR methods originally designed for English struggle with Morphologically Rich Languages (MRLs), where mention boundaries do not necessarily align with word boundaries, and a single token may contain multiple anaphors. CR modeling and evaluation protocols standardly assume that, as in English, words and mentions mostly align. Yet this assumption breaks down in MRLs, particularly in the context of LLMs' raw-text processing and end-to-end tasks. To assess and address this challenge, we introduce KibutzR, the first comprehensive CR dataset for Modern Hebrew, an MRL rich with complex words and pronominal clitics. We deliver an annotated dataset that identifies mentions at the word, sub-word, and multi-word levels, and propose an evaluation protocol that directly addresses word/morpheme boundary discrepancies. Our experiments show that contemporary LLMs perform significantly worse on Hebrew than on English, and that performance degrades further on raw, unsegmented text. Crucially, we show an inverse performance trend in Hebrew relative to English: smaller encoder models perform far better than contemporary decoder models, leaving ample room for investigation and improvement. We deliver a new benchmark for Hebrew coreference resolution and a segmentation-aware evaluation protocol to inform future work on other MRLs.
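To make the boundary issue concrete, here is a minimal sketch (not the paper's actual protocol) of why segmentation-aware evaluation matters. It represents mentions as character spans in the raw text, so a sub-word mention such as a pronominal clitic can be scored even when a system's tokenization differs from the gold segmentation. The example Hebrew word, the span choices, and the exact-match scoring function are illustrative assumptions, not KibutzR's published metric.

```python
def mention_prf(gold, pred):
    """Exact-match precision/recall/F1 over character-span mentions."""
    gold, pred = set(gold), set(pred)
    tp = len(gold & pred)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Hebrew example: the single word "ביתו" ("his house") contains two mentions:
# the noun stem "בית" ("house") and the pronominal clitic "ו" ("his").
text = "ביתו"
gold = [(0, 3), (3, 4)]          # char spans: noun stem + clitic
pred_word_level = [(0, 4)]       # a word-level system sees one mention
pred_subword = [(0, 3), (3, 4)]  # a segmentation-aware system recovers both

# The word-level prediction gets zero credit under exact span matching,
# while the sub-word prediction matches the gold mentions perfectly.
print(mention_prf(gold, pred_word_level))  # (0.0, 0.0, 0.0)
print(mention_prf(gold, pred_subword))     # (1.0, 1.0, 1.0)
```

Scoring at the character level rather than the word level is one way to sidestep the word/morpheme discrepancy the paper highlights: it lets gold and system mentions be compared even when their tokenizations disagree.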