Disentangling Similarity and Relatedness in Topic Models

arXiv cs.CL / 3/12/2026

Key Points

  • The paper studies how PLM-augmented topic models separate semantic similarity from taxonomic relatedness, contrasting them with traditional LDA-based topic modeling.
  • It introduces a large synthetic benchmark built from LLM-based annotations to train a neural scorer that quantifies similarity and relatedness among topic words.
  • Across multiple corpora and topic model families, the authors find that different model families encode distinct semantic structures, and that the similarity and relatedness scores predict downstream task performance, with the more informative axis depending on the task's requirements.
  • The work argues that treating similarity and relatedness as separate axes is essential for evaluating topic models and provides a practical pipeline for characterizing these aspects across models and data sources.

Abstract

The recent advancement of large language models has spurred a growing trend of integrating pre-trained language model (PLM) embeddings into topic models, fundamentally reshaping how topics capture semantic structure. Classical models such as Latent Dirichlet Allocation (LDA) derive topics from word co-occurrence statistics, whereas PLM-augmented models anchor these statistics to pre-trained embedding spaces, imposing a prior that also favours clustering of semantically similar words. This structural difference can be captured by the psycholinguistic dimensions of thematic relatedness and taxonomic similarity of the topic words. To disentangle these dimensions in topic models, we construct a large synthetic benchmark of word pairs using LLM-based annotation to train a neural scoring function. We apply this scorer to a comprehensive evaluation across multiple corpora and topic model families, revealing that different model families capture distinct semantic structure in their topics. We further demonstrate that similarity and relatedness scores successfully predict downstream task performance depending on task requirements. This paper establishes similarity and relatedness as essential axes for topic model evaluation and provides a reliable pipeline for characterising these across model families and corpora.
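The pipeline described in the abstract — scoring word pairs and aggregating over a topic's top words — can be sketched roughly as follows. Here `pair_scorer` is a stand-in for the paper's trained neural scoring function; its interface and the toy scores below are hypothetical, chosen only to illustrate how taxonomic similarity and thematic relatedness can diverge for the same topic.

```python
from itertools import combinations

def topic_scores(top_words, pair_scorer):
    """Average a pairwise (similarity, relatedness) scorer over all
    word pairs drawn from a topic's top words."""
    pairs = list(combinations(top_words, 2))
    sims, rels = zip(*(pair_scorer(a, b) for a, b in pairs))
    return sum(sims) / len(pairs), sum(rels) / len(pairs)

# Toy stand-in for a trained pair scorer: taxonomic siblings score high
# on similarity, thematically associated words score high on relatedness.
TOY = {
    frozenset({"dog", "cat"}): (0.9, 0.7),    # taxonomic siblings
    frozenset({"dog", "leash"}): (0.2, 0.8),  # thematically related
    frozenset({"cat", "leash"}): (0.1, 0.5),
}

def toy_scorer(a, b):
    return TOY.get(frozenset({a, b}), (0.0, 0.0))

sim, rel = topic_scores(["dog", "cat", "leash"], toy_scorer)
print(round(sim, 2), round(rel, 2))  # → 0.4 0.67
```

In this toy topic, mean relatedness exceeds mean similarity, the kind of profile the paper associates with co-occurrence-driven models such as LDA, whereas embedding-anchored models would tend to push the similarity axis higher.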