Cluster-R1: Large Reasoning Models Are Instruction-following Clustering Agents

arXiv cs.CL / 3/26/2026


Key Points

  • The paper argues that standard embedding models capture semantic similarity but often fail to reflect clustering requirements expressed as user instructions, while instruction-tuned embedders still struggle to infer latent cluster structure (e.g., choosing the number of clusters).
  • It proposes reframing instruction-following clustering as a generative task and training Large Reasoning Models (LRMs) to act as autonomous clustering agents that interpret high-level instructions and infer groupings.
  • The authors introduce ReasonCluster, a benchmark with 28 diverse instruction-following clustering tasks covering areas like daily dialogue, legal cases, and financial reports.
  • Experiments across multiple datasets and clustering scenarios show the LRM-based approach outperforming strong embedding baselines and other LRM baselines, with gains in faithfulness and interpretability of the resulting clusters.

Abstract

General-purpose embedding models excel at recognizing semantic similarities but fail to capture the characteristics of texts specified by user instructions. In contrast, instruction-tuned embedders can align embeddings with textual instructions yet cannot autonomously infer latent corpus structures, such as determining the optimal number of clusters. To address both limitations, we reframe instruction-following clustering as a generative task and train large reasoning models (LRMs) as autonomous clustering agents. Our reasoning-driven training pipeline enables LRMs to interpret high-level clustering instructions and then infer the corresponding latent groupings. To evaluate this paradigm, we introduce ReasonCluster, a comprehensive benchmark comprising 28 diverse tasks spanning daily dialogue, legal cases, and financial reports. Experiments across diverse datasets and clustering scenarios show that our approach consistently outperforms strong embedding-based methods and LRM baselines, demonstrating that explicit reasoning fosters more faithful and interpretable instruction-based clustering.
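To make the generative reframing concrete, here is a minimal sketch of what a clustering "agent" loop might look like on the consumer side. Everything here is an assumption for illustration: the paper does not specify an output schema, so this invents a hypothetical JSON format in which the model itself names the clusters and picks how many there are, and the model call is simulated by a hard-coded response string.

```python
import json

def parse_clustering_response(response: str, items: list[str]) -> list[list[str]]:
    """Parse a (hypothetical) LRM response that assigns items to named clusters.

    Assumed schema (not from the paper):
    {"clusters": [{"label": str, "members": [item indices]}]}
    The number of clusters is chosen by the model, matching the paper's claim
    that the agent infers latent structure rather than taking k as input.
    """
    data = json.loads(response)
    # Validate that the proposed clusters form a partition of the input items.
    assigned = sorted(i for c in data["clusters"] for i in c["members"])
    if assigned != list(range(len(items))):
        raise ValueError("model output is not a partition of the items")
    return [[items[i] for i in c["members"]] for c in data["clusters"]]

# Simulated model output for an instruction like
# "Group these headlines by the industry they concern":
items = [
    "Bank raises rates",
    "Court rules on patent",
    "Chip maker expands fab",
]
response = json.dumps({"clusters": [
    {"label": "finance", "members": [0]},
    {"label": "legal", "members": [1]},
    {"label": "technology", "members": [2]},
]})
print(parse_clustering_response(response, items))
# → [['Bank raises rates'], ['Court rules on patent'], ['Chip maker expands fab']]
```

In a real pipeline the `response` string would come from the trained LRM given the instruction plus the corpus; the key contrast with embedding-based methods is that the grouping criterion and the number of clusters both live in the model's generated output rather than in a downstream algorithm like k-means.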