DSBD: Dual-Aligned Structural Basis Distillation for Graph Domain Adaptation

arXiv cs.LG / 4/6/2026


Key Points

  • The paper studies graph domain adaptation (GDA), where a labeled source graph must be transferred to an unlabeled target graph despite distribution and topology shifts that break feature-centric transfer methods.
  • It proposes Dual-Aligned Structural Basis Distillation (DSBD), which builds a differentiable structural basis from probabilistic prototype graphs and learns it with source supervision for semantic discriminability.
  • DSBD aligns the learned basis to the target domain through two complementary constraints: permutation-invariant topological moment matching for geometric consistency, and Dirichlet energy calibration for spectral consistency.
  • It also introduces a decoupled inference approach that trains a new GNN on the distilled structural basis to reduce source-specific structural bias.
  • Experiments on graph and image benchmarks report consistent improvements over state-of-the-art GDA methods.
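To make the geometric-consistency constraint concrete, here is a minimal sketch of permutation-invariant topological moment matching. It compares graphs through raw moments of their degree distributions, which are invariant to node relabeling; the function names, the choice of degree moments, and the squared-L2 loss are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def degree_moments(adj: np.ndarray, k: int = 3) -> np.ndarray:
    """First k raw moments of the degree distribution.

    These are permutation-invariant topology summaries (illustrative
    choice; DSBD's exact moment statistics may differ).
    """
    deg = adj.sum(axis=1)  # node degrees
    return np.array([np.mean(deg ** p) for p in range(1, k + 1)])

def moment_matching_loss(adj_a: np.ndarray, adj_b: np.ndarray, k: int = 3) -> float:
    """Squared L2 gap between two graphs' degree-moment vectors."""
    diff = degree_moments(adj_a, k) - degree_moments(adj_b, k)
    return float(np.sum(diff ** 2))
```

Because the moments depend only on the degree multiset, relabeling the nodes of either graph (i.e., conjugating its adjacency matrix by a permutation matrix) leaves the loss unchanged, which is the permutation-invariance property the paper requires.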

Abstract

Graph domain adaptation (GDA) aims to transfer knowledge from a labeled source graph to an unlabeled target graph under distribution shifts. However, existing methods are largely feature-centric and overlook structural discrepancies, which become particularly detrimental under significant topology shifts. Such discrepancies alter both geometric relationships and spectral properties, leading to unreliable transfer of graph neural networks (GNNs). To address this limitation, we propose Dual-Aligned Structural Basis Distillation (DSBD) for GDA, a novel framework that explicitly models and adapts cross-domain structural variation. DSBD constructs a differentiable structural basis by synthesizing continuous probabilistic prototype graphs, enabling gradient-based optimization over graph topology. The basis is learned under source-domain supervision to preserve semantic discriminability, while being explicitly aligned to the target domain through a dual-alignment objective. Specifically, geometric consistency is enforced via permutation-invariant topological moment matching, and spectral consistency is achieved through Dirichlet energy calibration, jointly capturing structural characteristics across domains. Furthermore, we introduce a decoupled inference paradigm that mitigates source-specific structural bias by training a new GNN on the distilled structural basis. Extensive experiments on graph and image benchmarks demonstrate that DSBD consistently outperforms state-of-the-art methods.
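The spectral side of the dual alignment rests on Dirichlet energy, which measures how much node features vary across edges: E(X) = tr(Xᵀ L X) = ½ Σᵢⱼ Aᵢⱼ ‖xᵢ − xⱼ‖². The sketch below computes this quantity and a simple calibration loss that penalizes the gap between per-node energies of two graphs; the per-node normalization and squared-gap loss are hypothetical stand-ins for the paper's calibration term.

```python
import numpy as np

def dirichlet_energy(adj: np.ndarray, feats: np.ndarray) -> float:
    """Dirichlet energy tr(X^T L X), with L = D - A the unnormalized
    graph Laplacian. Equals 1/2 * sum_ij A_ij ||x_i - x_j||^2."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    return float(np.trace(feats.T @ lap @ feats))

def energy_calibration_loss(adj_s, x_s, adj_t, x_t) -> float:
    """Squared gap between per-node Dirichlet energies of two graphs --
    an assumed form of the spectral-consistency objective."""
    e_s = dirichlet_energy(adj_s, x_s) / adj_s.shape[0]
    e_t = dirichlet_energy(adj_t, x_t) / adj_t.shape[0]
    return (e_s - e_t) ** 2
```

Low Dirichlet energy means features are smooth over the graph; matching this statistic across domains encourages the distilled basis to reproduce the target graph's spectral smoothness rather than the source's.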