
SemBench: A Universal Semantic Framework for LLM Evaluation

arXiv cs.CL / 3/13/2026


Key Points

  • SemBench introduces a framework for automatically generating synthetic benchmarks that evaluate LLM semantic understanding using only dictionary sense definitions and a sentence encoder, eliminating the need for curated example sentences (see the sketch after this list).
  • The approach is scalable and language-independent, demonstrated across English, Spanish, and Basque to cover different linguistic resource levels.
  • Evaluations across a wide range of LLMs show that SemBench rankings correlate strongly with those obtained from traditional Word-in-Context (WiC) datasets.
  • The framework shows that only a small number of examples are needed to obtain stable, meaningful rankings, making the evaluation data-efficient.
  • SemBench enables cross-lingual evaluation of semantic understanding, offering a lightweight and adaptable benchmark tool for multi-language LLM evaluation.
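
The construction described in the first key point can be pictured roughly as follows. This is a minimal sketch, not the authors' actual pipeline: it assumes that WiC-style items are formed by pairing sense definitions of the same lemma and that an off-the-shelf sentence encoder embeds the definitions. The encoder model name, the toy dictionary entries, and the pairing heuristic are all assumptions for illustration only.

```python
# Minimal sketch (assumed pipeline, not the paper's exact method): build
# WiC-style "different sense" items directly from dictionary sense
# definitions, using a sentence encoder to score definition similarity.
from itertools import combinations
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Toy dictionary: lemma -> list of sense definitions (hypothetical entries).
senses = {
    "bank": [
        "a financial institution that accepts deposits and makes loans",
        "the sloping land alongside a river or lake",
    ],
    "bright": [
        "giving out or reflecting a lot of light",
        "intelligent and quick to learn",
    ],
}

# Assumed encoder; any multilingual sentence encoder could play this role.
encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

items = []
for lemma, defs in senses.items():
    embeddings = encoder.encode(defs, convert_to_tensor=True)
    # Pair distinct definitions of the same lemma as "different sense" items;
    # the cosine similarity of the definitions can serve as an item-difficulty
    # signal (closer definitions -> harder distinction).
    for i, j in combinations(range(len(defs)), 2):
        items.append({
            "lemma": lemma,
            "context_a": defs[i],
            "context_b": defs[j],
            "label": "different",
            "definition_similarity": float(cos_sim(embeddings[i], embeddings[j])),
        })

for item in items:
    print(item["lemma"], item["label"], round(item["definition_similarity"], 3))
```

Because the items are derived from dictionary definitions alone, the same recipe transfers to any language with a machine-readable dictionary and a sentence encoder, which is what makes the approach language-independent.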

Abstract

Recent progress in Natural Language Processing (NLP) has been driven by the emergence of Large Language Models (LLMs), which exhibit remarkable generative and reasoning capabilities. However, despite their success, evaluating the true semantic understanding of these models remains a persistent challenge. Traditional benchmarks such as Word-in-Context (WiC) effectively probe this capability, but their creation is resource-intensive and often limited to high-resource languages. In this paper, we introduce SemBench, a framework for automatically generating synthetic benchmarks that assess the semantic competence of LLMs using only dictionary sense definitions and a sentence encoder. This approach eliminates the need for curated example sentences, making it both scalable and language-independent. We evaluate SemBench in three languages (English, Spanish, and Basque) spanning different levels of linguistic resources, and across a wide range of LLMs. Our results show that rankings derived from SemBench strongly correlate with those obtained from standard WiC datasets. Furthermore, our analysis demonstrates that only a small number of examples is required to achieve stable and meaningful rankings. Overall, SemBench provides a lightweight, adaptable, and data-efficient framework for cross-lingual evaluation of semantic understanding in LLMs.
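
To make the correlation claim concrete, comparing SemBench rankings with WiC rankings amounts to computing a rank correlation (e.g., Spearman's rho) over per-model scores. The sketch below is illustrative only: the model names and accuracy values are hypothetical placeholders, not numbers reported in the paper.

```python
# Sketch of the ranking-correlation check: how similarly do two benchmarks
# rank the same set of models? Scores below are hypothetical placeholders.
from scipy.stats import spearmanr

sembench_acc = {"model_a": 0.71, "model_b": 0.64, "model_c": 0.58, "model_d": 0.52}
wic_acc      = {"model_a": 0.69, "model_b": 0.66, "model_c": 0.57, "model_d": 0.50}

models = sorted(sembench_acc)  # fixed order so the two score lists are aligned
rho, p_value = spearmanr(
    [sembench_acc[m] for m in models],
    [wic_acc[m] for m in models],
)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```

A rho close to 1 would indicate that the synthetic benchmark orders models essentially the same way as the manually curated WiC data, which is the property the abstract reports.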