Claim2Vec: Embedding Fact-Check Claims for Multilingual Similarity and Clustering

arXiv cs.CL / 4/14/2026


Key Points

  • Claim2Vec is presented as a multilingual embedding model that represents fact-checking claims as vectors to better support claim clustering, a problem less explored than claim matching or fact-checked claim retrieval.
  • The model is fine-tuned via contrastive learning using similar multilingual claim pairs to improve the semantic embedding space for clustering.
  • Experiments across three multilingual claim-clustering datasets, 14 baseline embedding models, and 7 clustering algorithms show that Claim2Vec significantly improves clustering performance, in both cluster label alignment and the geometric structure of the embedding space.
  • The authors find that clusters spanning multiple languages benefit from fine-tuning, indicating effective cross-lingual knowledge transfer.
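The contrastive fine-tuning described in the second key point can be sketched as an in-batch InfoNCE objective: each claim embedding is pulled toward its paired similar claim, while the other claims in the batch serve as negatives. A minimal numpy sketch (the exact loss form, the in-batch negative sampling, and the temperature value are assumptions for illustration; the paper's objective may differ):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.05):
    """In-batch contrastive (InfoNCE) loss over similar claim pairs.

    anchors[i] and positives[i] are embeddings of a similar claim pair;
    all other rows of `positives` act as negatives for anchor i.
    """
    # L2-normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = (a @ p.T) / temperature              # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability for exp
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct "class" for anchor i is its own positive, i.e. the diagonal
    return float(-np.mean(np.diag(log_probs)))
```

Minimizing this loss over many multilingual claim pairs reshapes the embedding space so that paraphrases of the same claim, across languages, sit close together, which is what makes downstream clustering easier.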

Abstract

Recurrent claims present a major challenge for automated fact-checking systems designed to combat misinformation, especially in multilingual settings. While tasks such as claim matching and fact-checked claim retrieval aim to address this problem by linking claim pairs, the broader challenge of effectively representing groups of similar claims that can be resolved with the same fact-check via claim clustering remains relatively underexplored. To address this gap, we introduce Claim2Vec, the first multilingual embedding model optimized to represent fact-check claims as vectors in an improved semantic embedding space. We fine-tune a multilingual encoder using contrastive learning with similar multilingual claim pairs. Experiments on the claim clustering task using three datasets, 14 multilingual embedding models, and 7 clustering algorithms demonstrate that Claim2Vec significantly improves clustering performance. Specifically, it enhances both cluster label alignment and the geometric structure of the embedding space across different cluster configurations. Our multilingual analysis shows that clusters containing multiple languages benefit from fine-tuning, demonstrating cross-lingual knowledge transfer.
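The abstract's "cluster label alignment" refers to how well predicted clusters agree with gold fact-check groupings; purity, ARI, and NMI are the standard metrics for this. A minimal purity sketch (purity is an assumed example metric here, not necessarily the one the paper reports):

```python
from collections import Counter

def cluster_purity(predicted, gold):
    """Fraction of claims whose predicted cluster's majority gold label
    matches their own gold label (1.0 = perfect alignment)."""
    clusters = {}
    for pred_id, gold_label in zip(predicted, gold):
        clusters.setdefault(pred_id, []).append(gold_label)
    # Count, per cluster, the size of its largest gold-label group
    correct = sum(Counter(labels).most_common(1)[0][1]
                  for labels in clusters.values())
    return correct / len(gold)
```

For example, if two clusters each contain only claims answerable by a single fact-check, purity is 1.0; merging two unrelated claim groups into one cluster drives it toward 0.5.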