Causality-Driven Disentangled Representation Learning in Multiplex Graphs

arXiv cs.LG / 3/26/2026


Key Points

  • The paper addresses how representation learning on multiplex graphs is hindered by entanglement between shared (common) and layer-specific (private) information, reducing generalization and interpretability.
  • It proposes a causal-inference-based, self-supervised framework (CaDeM) that disentangles common vs. private components across multiple relation layers.
  • CaDeM jointly aligns shared embeddings across layers, constrains private embeddings to encode layer-specific signals, and uses backdoor adjustment to prevent common embeddings from capturing private-layer information.
  • Experiments on both synthetic and real-world multiplex graph datasets show consistent improvements over prior baselines, suggesting better robustness and interpretability of learned representations.

Abstract

Learning representations from multiplex graphs, i.e., multi-layer networks where nodes interact through multiple relation types, is challenging due to the entanglement of shared (common) and layer-specific (private) information, which limits generalization and interpretability. In this work, we introduce CaDeM, a causal-inference-based framework that disentangles common and private components in a self-supervised manner. CaDeM jointly (i) aligns shared embeddings across layers, (ii) enforces private embeddings to capture layer-specific signals, and (iii) applies backdoor adjustment to ensure that the common embeddings capture only global information while remaining separated from the private representations. Experiments on synthetic and real-world datasets demonstrate consistent improvements over existing baselines, highlighting the effectiveness of our approach for robust and interpretable multiplex graph representation learning.
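The abstract does not give the paper's actual loss functions, but the three objectives it lists can be illustrated with a minimal NumPy sketch. This is a hypothetical stand-in, not CaDeM's implementation: `alignment_loss` plays the role of (i), and `decorrelation_penalty` is a simple independence surrogate standing in for the backdoor-adjustment term (iii); all function names and the form of each term are assumptions for illustration.

```python
import numpy as np

def alignment_loss(common_per_layer):
    # (i) Encourage the shared (common) embeddings to agree across layers:
    # mean squared distance of each layer's common embedding to the cross-layer mean.
    # common_per_layer: list of (n_nodes, d) arrays, one per relation layer.
    mean = np.mean(common_per_layer, axis=0)
    return float(np.mean([(c - mean) ** 2 for c in common_per_layer]))

def decorrelation_penalty(common, private):
    # (iii) Toy surrogate for the paper's backdoor adjustment: penalize the
    # cross-covariance between common and private embeddings so the common
    # part does not absorb layer-specific signal. common, private: (n, d) arrays.
    c = common - common.mean(axis=0)
    p = private - private.mean(axis=0)
    cross = c.T @ p / len(c)
    return float(np.sum(cross ** 2))

def total_loss(common_per_layer, private_per_layer, lam=1.0):
    # Combine the two illustrative terms; a real objective would also include
    # a layer-specific (private) reconstruction term for (ii).
    align = alignment_loss(common_per_layer)
    decor = sum(decorrelation_penalty(c, p)
                for c, p in zip(common_per_layer, private_per_layer))
    return align + lam * decor
```

With perfectly aligned common embeddings and uninformative private ones, both terms vanish, which matches the intuition that a fully disentangled solution incurs no penalty.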