Disentangling Similarity and Relatedness in Topic Models
arXiv cs.CL / 3/12/2026
Key Points
- The paper studies how PLM-augmented topic models separate semantic similarity from taxonomic relatedness, contrasting them with traditional LDA-based topic modeling.
- It introduces a large synthetic benchmark built from LLM-based annotations to train a neural scorer that quantifies similarity and relatedness among topic words.
- Across multiple corpora and topic model families, the authors find that different model families encode distinct semantic structures, and that the similarity and relatedness scores track downstream task performance in line with each task's requirements.
- The work argues that treating similarity and relatedness as separate axes is essential for evaluating topic models and provides a practical pipeline for characterizing these aspects across models and data sources.
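To make the two-axis idea concrete, here is a minimal sketch of profiling a topic's top words along separate similarity and relatedness axes. The toy lookup tables `SIM` and `REL` are assumptions standing in for the paper's learned neural scorer; only the two-axis aggregation scheme reflects the summary above.

```python
# Hedged sketch: characterize a topic's word set along two separate axes,
# similarity and relatedness, rather than a single coherence score.
# SIM and REL are toy pairwise tables standing in for a learned scorer.

from itertools import combinations

# Toy pairwise scores in [0, 1]; unlisted pairs default to 0.0.
# "car"/"automobile" are similar AND related; "car"/"wheel" are
# related (taxonomically associated) but not similar.
SIM = {("car", "automobile"): 0.95, ("car", "truck"): 0.7}
REL = {("car", "automobile"): 0.9, ("car", "wheel"): 0.85, ("car", "road"): 0.8}

def pair_score(table, w1, w2):
    """Symmetric lookup with a 0.0 default for unseen pairs."""
    return table.get((w1, w2), table.get((w2, w1), 0.0))

def topic_profile(words):
    """Average pairwise similarity and relatedness over a topic's top words."""
    pairs = list(combinations(words, 2))
    sim = sum(pair_score(SIM, a, b) for a, b in pairs) / len(pairs)
    rel = sum(pair_score(REL, a, b) for a, b in pairs) / len(pairs)
    return {"similarity": round(sim, 3), "relatedness": round(rel, 3)}

profile = topic_profile(["car", "automobile", "wheel", "road"])
print(profile)  # relatedness dominates: the words share a domain, not a meaning
```

A topic whose relatedness score far exceeds its similarity score groups words from one domain ("car", "wheel", "road") rather than near-synonyms, which is exactly the distinction the paper argues single-number coherence metrics collapse.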