Beyond Expression Similarity: Contrastive Learning Recovers Functional Gene Associations from Protein Interaction Structure

arXiv cs.LG · March 24, 2026


Key Points

  • The paper introduces a contrastive learning approach (Contrastive Association Learning, CAL) under the Predictive Associative Memory (PAM) framework, arguing that useful links arise from shared co-occurrence contexts rather than from embedding similarity.
  • Experiments in molecular biology show that training CAL on protein–protein interaction data recovers gene functional associations far better than gene-expression similarity, achieving cross-boundary AUCs of 0.908 (CRISPRi/K562) and 0.947 (DepMap).
  • Cross-domain testing indicates inductive transfer works better in biology than in text, with node-disjoint splits yielding AUC 0.826 (+0.127 vs baselines), suggesting physically grounded interaction signals generalize.
  • The authors find CAL scores anti-correlate with protein interaction degree (Spearman r = -0.590) and that improvements concentrate on understudied genes with focused interaction profiles.
  • They observe that higher-quality association data can outperform larger but noisier training sets, with results stable across random seeds and threshold choices.
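The node-disjoint evaluation behind the third key point can be sketched in a few lines. This is a hypothetical illustration of the protocol, not the paper's code; the gene names, split ratio, and function name are invented:

```python
import random

def node_disjoint_split(edges, holdout_frac=0.2, seed=0):
    """Hold out a set of genes; test edges touch only held-out genes,
    train edges touch only seen genes, so test genes are truly unseen."""
    genes = sorted({g for edge in edges for g in edge})
    rng = random.Random(seed)
    rng.shuffle(genes)
    n_test = int(len(genes) * holdout_frac)
    test_genes = set(genes[:n_test])
    train = [e for e in edges if e[0] not in test_genes and e[1] not in test_genes]
    test = [e for e in edges if e[0] in test_genes and e[1] in test_genes]
    return train, test, test_genes

# Toy interaction edges (invented for illustration).
edges = [("TP53", "MDM2"), ("BRCA1", "BARD1"), ("MYC", "MAX"),
         ("RPL3", "RPL4"), ("RPL4", "RPS6"), ("TP53", "BRCA1")]
train, test, held_out = node_disjoint_split(edges, holdout_frac=0.5)

# A model scoring a test edge has seen neither of its genes during training.
assert all(u in held_out and v in held_out for u, v in test)
assert all(u not in held_out and v not in held_out for u, v in train)
```

Edges that straddle the boundary (one seen gene, one unseen) are discarded, which is what makes the reported AUC 0.826 an inductive, unseen-gene measurement rather than a transductive one.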

Abstract

The Predictive Associative Memory (PAM) framework posits that useful relationships often connect items that co-occur in shared contexts rather than items that appear similar in embedding space. A contrastive MLP trained on co-occurrence annotations (Contrastive Association Learning, CAL) has improved multi-hop passage retrieval and discovered narrative function at corpus scale in text. We test whether this principle transfers to molecular biology, where protein–protein interactions provide functional associations distinct from gene expression similarity. Four experiments across two biological domains map the operating envelope. On gene perturbation data (Replogle K562 CRISPRi, 2,285 genes), CAL trained on STRING protein interactions achieves a cross-boundary AUC of 0.908 where expression similarity scores 0.518. A second gene dataset (DepMap, 17,725 genes) confirms the result after negative-sampling correction, reaching a cross-boundary AUC of 0.947. Two drug sensitivity experiments produce informative negatives that sharpen boundary conditions. Three cross-domain findings emerge: (1) inductive transfer succeeds in biology, where a node-disjoint split with unseen genes yields AUC 0.826 (Δ = +0.127), but fails in text (±0.10), suggesting physically grounded associations are more transferable than contingent co-occurrences; (2) CAL scores anti-correlate with interaction degree (Spearman r = -0.590), with gains concentrating on understudied genes with focused interaction profiles; (3) tighter association quality outperforms larger but noisier training sets, reversing the text pattern. Results are stable across training seeds (SD < 0.001) and cross-boundary threshold choices.
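The cross-boundary AUC figures quoted throughout are standard ranking AUCs: the probability that a random positive pair (a true functional association) is scored above a random negative pair. A minimal sketch of the equivalent Mann–Whitney form, with invented scores:

```python
def roc_auc(pos_scores, neg_scores):
    """AUC = P(random positive outranks random negative); ties count half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical CAL scores for known functional pairs vs. random pairs.
pos = [0.91, 0.85, 0.72, 0.60]
neg = [0.40, 0.55, 0.30, 0.65]
auc = roc_auc(pos, neg)
# 15 of the 16 positive–negative comparisons are ordered correctly,
# so AUC = 15/16 = 0.9375 on this toy data.
```

An AUC of 0.518, as reported for expression similarity, is thus barely above the 0.5 chance level, while CAL's 0.908–0.947 means the model orders almost all association/non-association pairs correctly.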