AI Navigate

From Topic to Transition Structure: Unsupervised Concept Discovery at Corpus Scale via Predictive Associative Memory

arXiv cs.AI / 3/20/2026


Key Points

  • The authors extend Predictive Associative Memory (PAM) to extract transition-structure concepts from 373 million co-occurrence pairs across 9,766 Project Gutenberg texts.
  • The model, a 29.4M-parameter contrastive network, maps passages into an association space where clustering reveals function, register, and literary tradition rather than mere topical similarity.
  • Clustering across six granularities (k=50 to 2,000) yields a multi-resolution concept map, spanning broad modes such as "direct confrontation" and precise registers such as "courtroom cross-examination."
  • Unseen novels can be assigned to existing clusters without retraining; the association model concentrates each novel into a selective subset of clusters, whereas raw-embedding assignment saturates nearly all clusters, indicating stronger generalization in the association space.
  • The work contrasts association-space clustering with embedding-based topic clustering and extends PAM from episodic recall to higher-level concept formation under compression.
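The core mechanism in the first two points can be sketched as a contrastive objective over co-occurring passage pairs. The details below are assumptions, not the paper's implementation: the paper does not specify its loss or architecture here, so this sketch uses a single linear projection in place of the 29.4M-parameter network and a standard InfoNCE loss, with co-occurring passages as positives and other in-batch passages as negatives.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_projection(dim_in, dim_out):
    """One linear map standing in for the association network (an assumption;
    the actual model is a 29.4M-parameter network of unspecified shape)."""
    return rng.normal(0, 0.02, size=(dim_in, dim_out))

def info_nce_loss(W, anchors, positives, temperature=0.1):
    """InfoNCE over a batch: each anchor's positive is the passage that
    co-occurred with it in the same text; every other positive in the
    batch serves as a negative."""
    za = anchors @ W
    zp = positives @ W
    # L2-normalise so the similarity is cosine similarity
    za = za / np.linalg.norm(za, axis=1, keepdims=True)
    zp = zp / np.linalg.norm(zp, axis=1, keepdims=True)
    logits = za @ zp.T / temperature             # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))          # correct pair on the diagonal

# Toy batch: 8 pairs of 64-dim "pre-trained embeddings"
W = init_projection(64, 32)
anchors = rng.normal(size=(8, 64))
positives = anchors + 0.1 * rng.normal(size=(8, 64))  # co-occurring passages
loss = info_nce_loss(W, anchors, positives)
print(float(loss))
```

The "compression" regime the paper describes would correspond to the projection being too small to memorise individual pairs, forcing it to encode recurring transition patterns instead.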

Abstract

Embedding models group text by semantic content: what text is about. We show that temporal co-occurrence within texts discovers a different kind of structure: recurrent transition-structure concepts, or what text does. We train a 29.4M-parameter contrastive model on 373 million co-occurrence pairs from 9,766 Project Gutenberg texts (24.96 million passages), mapping pre-trained embeddings into an association space where passages with similar transition structure cluster together. Under capacity constraint (42.75% accuracy), the model must compress across recurring patterns rather than memorise individual co-occurrences. Clustering at six granularities (k=50 to k=2,000) produces a multi-resolution concept map, from broad modes like "direct confrontation" and "lyrical meditation" to precise registers and scene templates like "sailor dialect" and "courtroom cross-examination." At k=100, clusters average 4,508 books each (of 9,766), confirming corpus-wide patterns. Direct comparison with embedding-similarity clustering shows that raw embeddings group by topic while association-space clusters group by function, register, and literary tradition. Unseen novels are assigned to existing clusters without retraining; the association model concentrates each novel into a selective subset of coherent clusters, while raw-embedding assignment saturates nearly all clusters. Validation controls address positional, length, and book-concentration confounds. The method extends Predictive Associative Memory (PAM, arXiv:2602.11322) from episodic recall to concept formation: where PAM recalls specific associations, multi-epoch contrastive training under compression extracts structural patterns that transfer to unseen texts, the same framework producing qualitatively different behaviour in a different regime.
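The multi-resolution concept map and the unseen-novel assignment described above can be illustrated with a small sketch. Everything here is an assumption for illustration: the paper does not state its clustering algorithm, so this uses plain k-means over toy association-space vectors at a few granularities, and assigns unseen passages to the nearest existing centroid without retraining.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain Lloyd's k-means; a stand-in for whatever clustering the
    paper applies to association-space vectors."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids, labels

# Toy association-space vectors for the "training corpus" (300 x 16)
X = rng.normal(size=(300, 16))

# Multi-resolution concept map: one clustering per granularity
# (the paper uses k = 50 through k = 2,000; small values here for speed)
granularities = [5, 10, 20]
concept_map = {k: kmeans(X, k)[0] for k in granularities}

def assign_unseen(passages, centroids):
    """Assign unseen passages to existing clusters by nearest centroid,
    with no retraining of the model or the clustering."""
    dists = np.linalg.norm(passages[:, None] - centroids[None], axis=2)
    return dists.argmin(axis=1)

unseen = rng.normal(size=(12, 16))   # passages from an unseen novel
labels = assign_unseen(unseen, concept_map[10])
print(sorted(set(labels.tolist())))
```

Under this scheme, the paper's observation would show up as the unseen novel's passages landing in a small, coherent subset of clusters in association space, versus spreading across nearly all clusters when raw embeddings are used.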