Correspondence Analysis and PMI-Based Word Embeddings: A Comparative Study

arXiv cs.CL / 3/11/2026

Ideas & Deep Analysis · Models & Research

Key Points

  • The paper establishes a formal mathematical connection between correspondence analysis (CA), a dimensionality reduction technique using singular value decomposition, and PMI-based word embedding methods like GloVe and Word2Vec.
  • New variants of CA are introduced, specifically ROOT-CA and ROOTROOT-CA, which apply square-root and fourth-root transformations to word-context matrices respectively, improving performance over standard PMI-based embeddings.
  • An extensive empirical evaluation across multiple corpora and word similarity benchmarks shows that these CA variants slightly outperform traditional PMI-based methods and are competitive with transformer-based contextual embeddings like BERT.
  • The study highlights how extreme values in the decomposed matrices influence the success or failure of these embedding methods, providing insights into embedding quality and stability.
  • Although focusing on static embeddings, the paper situates its findings in relation to modern transformer contextual embeddings, advancing understanding in word representation learning techniques.
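The CA pipeline and the ROOT-CA variant described in the points above can be sketched in a few lines of NumPy. This is a minimal illustration of standard correspondence analysis (SVD of the standardized residuals of the relative-frequency table), with ROOT-CA obtained by taking an elementwise square root of the counts first; the function names and the toy count matrix are illustrative, and the paper's exact normalization and evaluation setup may differ.

```python
import numpy as np

def ca_embeddings(counts, dim=2):
    """Correspondence analysis of a word-context count matrix via SVD.

    Computes the SVD of the standardized residuals of the
    relative-frequency table P = counts / total, then returns
    row (word) principal coordinates.
    """
    P = counts / counts.sum()
    r = P.sum(axis=1)                    # row (word) masses
    c = P.sum(axis=0)                    # column (context) masses
    # Standardized residuals: D_r^{-1/2} (P - r c^T) D_c^{-1/2}
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))
    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    # Scale the first `dim` left singular vectors into principal coordinates
    return (U[:, :dim] * sigma[:dim]) / np.sqrt(r)[:, None]

def root_ca_embeddings(counts, dim=2):
    """ROOT-CA: apply CA after an elementwise square-root transform."""
    return ca_embeddings(np.sqrt(counts), dim=dim)

# Toy 4-word x 3-context co-occurrence matrix (illustrative only)
counts = np.array([[8., 1., 1.],
                   [7., 2., 1.],
                   [1., 6., 3.],
                   [1., 2., 7.]])
emb = root_ca_embeddings(counts, dim=2)
print(emb.shape)  # (4, 2)
```

ROOTROOT-CA would analogously apply the square root twice (i.e. a fourth-root transform) before the same decomposition.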

Computer Science > Computation and Language

arXiv:2405.20895 (cs)
[Submitted on 31 May 2024 (v1), last revised 10 Mar 2026 (this version, v3)]

Title: Correspondence Analysis and PMI-Based Word Embeddings: A Comparative Study

Abstract: Popular word embedding methods such as GloVe and Word2Vec are related to the factorization of the pointwise mutual information (PMI) matrix. In this paper, we establish a formal connection between correspondence analysis (CA) and PMI-based word embedding methods. CA is a dimensionality reduction method that uses singular value decomposition (SVD), and we show that CA is mathematically close to the weighted factorization of the PMI matrix. We further introduce variants of CA for word-context matrices, namely CA applied after a square-root transformation (ROOT-CA) and after a fourth-root transformation (ROOTROOT-CA). We analyze the performance of these methods and examine how their success or failure is influenced by extreme values in the decomposed matrix. Although our primary focus is on traditional static word embedding methods, we also include a comparison with a transformer-based encoder (BERT) to situate the results relative to contextual embeddings. Empirical evaluations across multiple corpora and word-similarity benchmarks show that ROOT-CA and ROOTROOT-CA perform slightly better overall than standard PMI-based methods and achieve results competitive with BERT.
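The abstract's premise — that methods like GloVe and Word2Vec are related to factorizing a PMI matrix — can be illustrated with a minimal positive-PMI (PPMI) plus SVD construction, a standard recipe from the count-based embedding literature. This is a sketch under that standard formulation, not the paper's exact weighted factorization; the toy counts are illustrative.

```python
import numpy as np

def ppmi_svd_embeddings(counts, dim=2):
    """Build a PPMI matrix from a word-context count table and factorize it by SVD."""
    total = counts.sum()
    p_wc = counts / total                         # joint probabilities
    p_w = p_wc.sum(axis=1, keepdims=True)         # word marginals
    p_c = p_wc.sum(axis=0, keepdims=True)         # context marginals
    with np.errstate(divide="ignore"):
        pmi = np.log(p_wc / (p_w * p_c))          # log(0) -> -inf for zero counts
    ppmi = np.maximum(pmi, 0.0)                   # clip negatives (and -inf) to 0
    U, sigma, Vt = np.linalg.svd(ppmi, full_matrices=False)
    return U[:, :dim] * sigma[:dim]               # truncated word embeddings

# Toy 4-word x 3-context co-occurrence matrix (illustrative only)
counts = np.array([[8., 1., 0.],
                   [7., 2., 1.],
                   [0., 6., 3.],
                   [1., 2., 7.]])
emb = ppmi_svd_embeddings(counts, dim=2)
print(emb.shape)  # (4, 2)
```

The paper's contribution, per the abstract, is showing that CA is mathematically close to a weighted version of this factorization, and that root transformations of the count matrix improve on it.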
Subjects: Computation and Language (cs.CL)
Cite as: arXiv:2405.20895 [cs.CL]
  (or arXiv:2405.20895v3 [cs.CL] for this version)
  https://doi.org/10.48550/arXiv.2405.20895

Submission history

From: Qianqian Qi
[v1] Fri, 31 May 2024 15:04:15 UTC (1,471 KB)
[v2] Fri, 8 Nov 2024 09:35:29 UTC (2,221 KB)
[v3] Tue, 10 Mar 2026 09:43:57 UTC (1,873 KB)