PRISM: LLM-Guided Semantic Clustering for High-Precision Topics

arXiv cs.LG / 4/6/2026


Key Points

  • PRISM (Precision-Informed Semantic Modeling) is a structured topic modeling framework that uses LLM-provided sparse labels to fine-tune a lightweight sentence encoder, then applies thresholded clustering to produce highly separable, narrow-domain topic clusters.
  • The approach aims to combine the representational richness of LLM embeddings with the low cost and interpretability of latent semantic clustering, achieving better topic separation than strong local topic models and sometimes even larger embedding-model clustering baselines.
  • PRISM is designed to require only a small number of LLM queries for training, making it more practical than repeatedly relying on frontier models for large-scale topic discovery.
  • The paper contributes a student–teacher distillation pipeline, evaluates sampling strategies to improve local embedding geometry for clustering, and proposes an interpretable, locally deployable method for web-scale text analysis.
  • Reported results span multiple corpora and position PRISM as useful for tracking nuanced claims and subtopics online while maintaining clearer cluster structure than many general topic modeling methods.
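The distillation step described above can be sketched in miniature. The snippet below is an illustrative stand-in, not the paper's actual architecture or loss: it uses random vectors in place of base sentence embeddings, hypothetical LLM same-topic/different-topic pair labels, and a simple linear projection trained with a contrastive pull/push objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for base sentence embeddings (rows = documents).
# In PRISM these would come from a lightweight sentence encoder.
X = rng.normal(size=(8, 16))

# Sparse LLM supervision: (i, j, same_topic) pairs on a small sample.
# These labels are hypothetical, for illustration only.
pairs = [(0, 1, 1), (0, 2, 0), (3, 4, 1), (3, 5, 0), (6, 7, 1), (1, 6, 0)]

def fit_projection(X, pairs, dim=8, lr=0.05, steps=200, margin=1.0):
    """Learn a linear projection that pulls same-topic pairs together
    and pushes different-topic pairs apart (contrastive objective)."""
    W = rng.normal(scale=0.1, size=(X.shape[1], dim))
    for _ in range(steps):
        grad = np.zeros_like(W)
        for i, j, same in pairs:
            diff = X[i] - X[j]
            d = diff @ W                       # projected difference
            if same:
                # pull: gradient of ||d||^2 w.r.t. W
                grad += np.outer(diff, 2 * d)
            else:
                dist = np.linalg.norm(d)
                if dist < margin:              # push apart up to the margin
                    grad -= np.outer(diff, 2 * d) * (margin - dist) / (dist + 1e-9)
        W -= lr * grad / len(pairs)
    return W

W = fit_projection(X, pairs)
Z = X @ W  # the "fine-tuned" embedding space handed to the clustering step
```

After training, documents the (simulated) LLM judged same-topic sit closer together than different-topic ones, which is the local-geometry property the clustering stage relies on.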

Abstract

In this paper, we propose Precision-Informed Semantic Modeling (PRISM), a structured topic modeling framework combining the benefits of rich representations captured by LLMs with the low cost and interpretability of latent semantic clustering methods. PRISM fine-tunes a sentence encoding model using a sparse set of LLM-provided labels on samples drawn from some corpus of interest. We segment this embedding space with thresholded clustering, yielding clusters that separate closely related topics within some narrow domain. Across multiple corpora, PRISM improves topic separability over state-of-the-art local topic models and even over clustering on large, frontier embedding models while requiring only a small number of LLM queries to train. This work contributes to several research streams by providing (i) a student–teacher pipeline to distill sparse LLM supervision into a lightweight model for topic discovery; (ii) an analysis of the efficacy of sampling strategies to improve local geometry for cluster separability; and (iii) an effective approach for web-scale text analysis, enabling researchers and practitioners to track nuanced claims and subtopics online with an interpretable, locally deployable framework.
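The abstract does not specify which clustering algorithm "thresholded clustering" refers to, so the sketch below shows one common instantiation: agglomerative clustering with a distance threshold instead of a fixed cluster count, applied to toy embeddings standing in for the fine-tuned space. The threshold value and the synthetic data are assumptions for illustration.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(1)

# Toy embeddings: three tight groups standing in for fine-tuned
# sentence embeddings of a narrow-domain corpus.
centers = rng.normal(size=(3, 8))
Z = np.vstack([c + 0.05 * rng.normal(size=(10, 8)) for c in centers])

# Thresholded clustering: no fixed number of clusters; points merge
# until the linkage distance exceeds the threshold, so the topic
# count is discovered rather than chosen in advance.
clusterer = AgglomerativeClustering(
    n_clusters=None,          # let the threshold decide the count
    distance_threshold=1.0,   # assumed value; tuned per corpus in practice
)
labels = clusterer.fit_predict(Z)
print(clusterer.n_clusters_)  # number of topic clusters discovered
```

The appeal of this formulation for narrow-domain topic discovery is that tightening or loosening the single threshold directly trades off cluster granularity against separability, without re-specifying a topic count.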