Cheeger--Hodge Contrastive Learning for Structurally Robust Graph Representation Learning

arXiv cs.LG / 4/30/2026

📰 News · Models & Research

Key Points

  • The paper proposes Cheeger--Hodge Contrastive Learning (CHCL) to make graph contrastive learning less brittle under structural perturbations by grounding invariances in a perturbation-stable signature.
  • CHCL aligns a Cheeger--Hodge joint signature across augmented graph views, combining a Cheeger-inspired connectivity component derived from the algebraic connectivity (λ₂) with low-frequency spectral information from the 1-Hodge Laplacian.
  • By using this joint signature to guide representation learning, CHCL aims to capture both global graph connectivity and higher-order structural cues.
  • Experiments on common benchmarks, including transfer settings, show CHCL consistently improves performance while also enhancing robustness and generalization compared with prior approaches.
  • Overall, the work suggests that designing contrastive objectives around mathematically grounded structural signatures can yield more stable graph embeddings than augmentation-only strategies.
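The joint signature described in the key points can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it takes λ₂ (the second-smallest eigenvalue of the graph Laplacian) as the Cheeger-inspired connectivity component and appends the lowest eigenvalues of the 1-Hodge Laplacian, which for a graph with no filled triangles reduces to the edge (down) Laplacian B₁ᵀB₁. Function and parameter names are illustrative assumptions.

```python
import numpy as np

def joint_signature(edges, n_nodes, k=4):
    """Sketch of a Cheeger--Hodge joint signature (illustrative, not the
    paper's exact construction). edges: list of (u, v) pairs; k: number of
    low-frequency 1-Hodge eigenvalues to keep."""
    m = len(edges)
    # Oriented node-edge incidence matrix B1: -1 at tail, +1 at head.
    B1 = np.zeros((n_nodes, m))
    for j, (u, v) in enumerate(edges):
        B1[u, j] = -1.0
        B1[v, j] = 1.0
    # Graph (0-Hodge) Laplacian L0 = B1 B1^T; its second-smallest
    # eigenvalue is the algebraic connectivity lambda_2.
    L0 = B1 @ B1.T
    lambda2 = np.linalg.eigvalsh(L0)[1]
    # 1-Hodge Laplacian; with no 2-cells (B2 = 0) it reduces to B1^T B1.
    L1 = B1.T @ B1
    low_freq = np.linalg.eigvalsh(L1)[:k]
    return np.concatenate(([lambda2], low_freq))
```

For a 3-node path graph, the sketch returns λ₂ = 1 followed by the edge-Laplacian spectrum {1, 3}; a perturbation-stable signature in the paper's sense would be built from such low-frequency quantities, which move only slightly under local edge edits.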

Abstract

Graph Contrastive Learning (GCL) has emerged as a prominent framework for unsupervised graph representation learning. However, relying on augmentation design alone to define the invariances learned by GCL can be brittle under structural perturbations. To address this issue, we propose Cheeger--Hodge Contrastive Learning (CHCL), a framework that aligns a perturbation-stable Cheeger--Hodge joint signature across augmented views for robust graph representation learning. The proposed signature combines a Cheeger-inspired connectivity signature derived from the algebraic connectivity \(\lambda_2\) with the low-frequency spectrum of the 1-Hodge Laplacian, thereby capturing both global connectivity and higher-order structural information. By aligning encoder representations with the proposed Cheeger--Hodge joint signature across augmented views, CHCL learns graph embeddings that are robust to local structural perturbations. Extensive experiments on standard benchmarks and transfer settings demonstrate that CHCL consistently improves performance, robustness, and generalization.
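The abstract's idea of aligning encoder representations with a structural signature across augmented views can be sketched as a standard InfoNCE contrastive term plus a signature-alignment penalty. Everything below is a hedged toy: the linear `head`, the squared-error penalty, and the weighting `lam` are stand-ins, not the paper's actual objective.

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """Standard InfoNCE between matching rows of two view embeddings."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))  # positives on the diagonal

def chcl_loss(z1, z2, sig1, sig2, head, lam=0.1):
    """Illustrative CHCL-style objective (an assumption, not the paper's
    exact loss): contrast the two augmented views, and pull a linear
    projection of each view's embeddings toward its perturbation-stable
    joint signature."""
    align = np.mean((z1 @ head - sig1) ** 2) + np.mean((z2 @ head - sig2) ** 2)
    return info_nce(z1, z2) + lam * align
```

In this sketch, because the signatures change little under local perturbations, the alignment term anchors the embedding geometry to structure that the augmentations preserve, which is the robustness mechanism the abstract describes.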