A Cross-graph Tuning-free GNN Prompting Framework

arXiv cs.LG / 4/2/2026


Key Points

  • The paper proposes a Cross-graph Tuning-free Prompting Framework (CTP) to improve GNN prompting by enabling generalization across graphs without task-specific retraining or parameter updates.
  • CTP is designed to work for both homogeneous and heterogeneous graphs and can be deployed directly to unseen graphs without additional parameter tuning, positioning it as a plug-and-play inference engine.
  • Experiments on few-shot prediction tasks show CTP delivers substantial accuracy improvements over state-of-the-art methods, including an average gain of 30.8% and a maximum gain of 54%.
  • The work addresses a key limitation of prior graph prompting approaches—weak cross-graph generalization—which the authors argue undermines the practical promise of tuning-free prompting.

Abstract

GNN prompting aims to adapt models across tasks and graphs without requiring extensive retraining. However, most existing graph prompting methods still require task-specific parameter updates and struggle to generalize across graphs, which limits their performance and undermines the core promise of prompting. In this work, we introduce a Cross-graph Tuning-free Prompting Framework (CTP), which supports both homogeneous and heterogeneous graphs, can be deployed directly to unseen graphs without further parameter tuning, and thus enables a plug-and-play GNN inference engine. Extensive experiments on few-shot prediction tasks show that, compared to state-of-the-art methods, CTP achieves an average accuracy gain of 30.8% and a maximum gain of 54%, confirming its effectiveness and offering a new perspective on graph prompt learning.
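To make the tuning-free idea concrete, the sketch below shows one common form of prompt-style few-shot inference with a frozen GNN: class prompts are built from a handful of labeled support nodes on the new graph, and query nodes are classified by similarity to those prompts, with no gradient updates anywhere. This is a minimal illustration under assumed names (`FrozenGCN`, `prompt_predict`) and an assumed prototype-style prompt design; the abstract does not specify CTP's architecture, so none of this should be read as the paper's actual method.

```python
# Hypothetical sketch of tuning-free graph prompting (NOT CTP's actual
# implementation): a frozen, pre-trained GNN encodes an unseen graph,
# class "prompts" are averaged support-node embeddings, and queries are
# classified by cosine similarity -- no parameter updates on the new graph.
import torch
import torch.nn.functional as F


class FrozenGCN(torch.nn.Module):
    """Two-layer GCN encoder; weights are assumed to be pre-trained."""

    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.w1 = torch.nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = torch.nn.Linear(hid_dim, out_dim, bias=False)

    def forward(self, x, adj_norm):
        # adj_norm: symmetrically normalized adjacency with self-loops.
        h = F.relu(adj_norm @ self.w1(x))
        return adj_norm @ self.w2(h)


def normalize_adj(adj):
    """Standard GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)  # self-loops guarantee degree >= 1
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


@torch.no_grad()  # tuning-free: no gradients flow on the unseen graph
def prompt_predict(encoder, x, adj, support_idx, support_y, query_idx):
    z = F.normalize(encoder(x, normalize_adj(adj)), dim=-1)
    # One prompt per class: the mean embedding of its support nodes.
    prompts = torch.stack([z[support_idx[support_y == c]].mean(0)
                           for c in support_y.unique()])
    # Predicted labels are positions in support_y.unique(), ranked by
    # cosine similarity between query embeddings and class prompts.
    return (z[query_idx] @ F.normalize(prompts, dim=-1).T).argmax(-1)
```

In this reading of "plug-and-play", the pre-trained encoder is applied as-is to the new graph's features and adjacency; the only per-graph work is computing prompts from the few labeled support nodes, which is why no retraining or fine-tuning step appears.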