Toward a universal foundation model for graph-structured data

arXiv cs.LG / 4/9/2026


Key Points

  • The paper argues that biomedical graph analysis lacks a broadly reusable foundation model comparable to those that transformed language and vision.
  • It introduces a graph foundation model that aims for transferable representations independent of node identities and feature schemes by using feature-agnostic structural prompts (e.g., degree/centrality/community and diffusion-based signatures).
  • The method combines these structural prompts with a message-passing backbone and pretrains once on heterogeneous graphs, then reuses the model on new datasets with minimal adaptation.
  • Experiments on multiple benchmarks show performance that matches or exceeds strong supervised baselines, with improved zero-shot and few-shot generalization on held-out graphs.
  • On SagePPI specifically, supervised fine-tuning of the pretrained model reaches a mean ROC-AUC of 95.5%, outperforming the best supervised message-passing baseline by 21.8%.
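The structural prompts in the second bullet can be made concrete with a small sketch. The descriptors below (degree, local clustering, and k-step random-walk return probabilities as a crude diffusion-based signature) are illustrative stand-ins for the categories the paper names; the exact features and their encoding are not specified in this summary.

```python
import numpy as np

def structural_prompts(A, walk_steps=3):
    """Feature-agnostic per-node descriptors (an illustrative recipe,
    not the paper's exact one). A is a symmetric 0/1 adjacency matrix.

    Columns: degree, local clustering coefficient, and return
    probabilities of 1..walk_steps-step random walks.
    """
    deg = A.sum(axis=1)
    # Local clustering: triangles through each node / possible wedges.
    tri = np.diag(A @ A @ A) / 2.0
    wedges = deg * (deg - 1) / 2.0
    clust = np.where(wedges > 0, tri / np.maximum(wedges, 1), 0.0)
    # Row-normalized random-walk transition matrix.
    P = A / np.maximum(deg, 1)[:, None]
    Pk = np.eye(len(A))
    returns = []
    for _ in range(walk_steps):
        Pk = Pk @ P
        returns.append(np.diag(Pk).copy())  # P(walk returns to start)
    return np.column_stack([deg, clust, *returns])
```

Because none of these quantities depend on node identities or input feature schemes, the same descriptor function applies unchanged to any graph, which is the property the paper exploits for transfer.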

Abstract

Graphs are a central representation in biomedical research, capturing molecular interaction networks, gene regulatory circuits, cell–cell communication maps, and knowledge graphs. Despite their importance, graph analysis currently lacks a broadly reusable foundation model comparable to those that have transformed language and vision. Existing graph neural networks are typically trained on a single dataset and learn representations tied to that graph's node features, topology, and label space, limiting their ability to transfer across domains. This lack of generalization is particularly problematic in biology and medicine, where networks vary substantially across cohorts, assays, and institutions. Here we introduce a graph foundation model designed to learn transferable structural representations that are not tied to particular node identities or feature schemes. Our approach leverages feature-agnostic graph properties, including degree statistics, centrality measures, community structure indicators, and diffusion-based signatures, and encodes them as structural prompts. These prompts are integrated with a message-passing backbone to embed diverse graphs into a shared representation space. The model is pretrained once on heterogeneous graphs and subsequently reused on unseen datasets with minimal adaptation. Across multiple benchmarks, our pretrained model matches or exceeds strong supervised baselines while demonstrating superior zero-shot and few-shot generalization on held-out graphs. On the SagePPI benchmark, supervised fine-tuning of the pretrained backbone achieves a mean ROC-AUC of 95.5%, a gain of 21.8% over the best supervised message-passing baseline. The proposed technique thus offers a practical route toward reusable, foundation-scale models for graph-structured data in biomedical and network science applications.
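To see how structural prompts can feed a message-passing backbone, here is a minimal, assumption-laden sketch: each round averages neighbor features and concatenates them with the node's own, so the final embedding depends only on structure-derived inputs. The paper's actual backbone (layer types, learned weights, pretraining objective) is not described in this summary; this shows the wiring, not the model.

```python
import numpy as np

def message_passing(X, A, rounds=2):
    """Fold structural-prompt features X (n_nodes x d) through a toy
    message-passing scheme over adjacency matrix A: each round
    concatenates every node's features with the mean of its
    neighbors' features. Hypothetical stand-in for a learned GNN.
    """
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    H = X
    for _ in range(rounds):
        neigh = (A @ H) / deg               # mean over neighbors
        H = np.concatenate([H, neigh], axis=1)
    return H
```

Because the input X can be any feature-agnostic structural descriptor matrix, the same backbone maps graphs from different datasets into one shared representation space, which is the reuse property the abstract claims.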