AI Navigate

Modality-free Graph In-context Alignment

arXiv cs.LG / 3/17/2026


Key Points

  • MF-GIA makes a pretrained graph encoder promptable for few-shot cross-domain prediction without modality assumptions.
  • It uses gradient fingerprints to parameterize lightweight transformations that align pre-encoded features and indexed labels into unified semantic spaces.
  • A dual prompt-aware attention mechanism with an episodic objective is introduced to learn prompt-based reasoning by matching queries against aligned support examples.
  • At inference, MF-GIA adapts without any parameter updates: a few-shot support set alone triggers cross-domain alignment, enabling immediate prediction on unseen domains. Experiments show superior few-shot performance across diverse graph domains and strong generalization.
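The alignment described in the first two points can be sketched as follows. The paper's exact parameterization is not reproduced here: the gradient-fingerprint computation, the FiLM-style modulation, and all variable names below are illustrative assumptions, not MF-GIA's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: pre-encoded support features (no raw-modality access)
# and integer-indexed labels, as in the modality-free setting.
d_in, d_align, n_classes = 16, 8, 3
support_x = rng.normal(size=(6, d_in))          # pre-vectorized features
support_y = rng.integers(0, n_classes, size=6)  # indexed labels

# "Gradient fingerprint" stand-in: the gradient of a simple probe loss
# with respect to a shared probe vector, used as a domain descriptor.
probe_w = 0.1 * rng.normal(size=d_in)
fingerprint = support_x.T @ np.tanh(support_x @ probe_w)  # (d_in,)

# The fingerprint parameterizes a lightweight transformation; here a
# per-dimension scaling before a shared projection (an assumption).
base_W = 0.1 * rng.normal(size=(d_align, d_in))
scale = np.tanh(fingerprint)
aligned_x = (support_x * scale) @ base_W.T  # features in the unified space

# Indexed labels are embedded into the same unified semantic space.
label_emb = 0.1 * rng.normal(size=(n_classes, d_align))
aligned_y = label_emb[support_y]
```

The point of conditioning the transform on a fingerprint rather than training it per domain is that the pretrained encoder's weights stay untouched; only the lightweight map changes with the support set.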

Abstract

In-context learning (ICL) converts static encoders into task-conditioned reasoners, enabling adaptation to new data from just a few examples without updating pretrained parameters. This capability is essential for graph foundation models (GFMs) to approach LLM-level generality. Yet current GFMs struggle with cross-domain alignment, typically relying on modality-specific encoders that fail when graphs are pre-vectorized or raw data is inaccessible. In this paper, we introduce Modality-Free Graph In-context Alignment (MF-GIA), a framework that makes a pretrained graph encoder promptable for few-shot prediction across heterogeneous domains without modality assumptions. MF-GIA captures domain characteristics through gradient fingerprints, which parameterize lightweight transformations that align pre-encoded features and indexed labels into unified semantic spaces. During pretraining, a dual prompt-aware attention mechanism with an episodic objective learns to match queries against aligned support examples to establish prompt-based reasoning capabilities. At inference, MF-GIA performs parameter-update-free adaptation using only a few-shot support set to trigger cross-domain alignment and enable immediate prediction on unseen domains. Experiments demonstrate that MF-GIA achieves superior few-shot performance across diverse graph domains and strong generalization to unseen domains.
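The inference step the abstract describes, matching a query against an aligned support set with no gradient updates, can be sketched with attention over support embeddings. The handcrafted values below are illustrative only (chosen so the outcome is easy to verify) and do not come from the paper.

```python
import numpy as np

# Aligned support embeddings in the unified space (assumed already produced
# by the alignment step), two support examples per class.
support_z = np.array([
    [1.0, 0.0, 0.0], [0.9, 0.1, 0.0],   # class 0
    [0.0, 1.0, 0.0], [0.1, 0.9, 0.0],   # class 1
    [0.0, 0.0, 1.0], [0.0, 0.1, 0.9],   # class 2
])
support_y = np.array([0, 0, 1, 1, 2, 2])
query_z = np.array([0.05, 0.95, 0.05])  # closest to the class-1 supports

# Attention matching: softmax over query-support similarities, then
# aggregate support label one-hots -- no parameter updates at inference.
sims = support_z @ query_z
attn = np.exp(sims - sims.max())
attn /= attn.sum()
probs = attn @ np.eye(3)[support_y]
pred = int(probs.argmax())  # 1: the query matches the class-1 supports
```

Because prediction reduces to a similarity-weighted vote over the support set, swapping in a new domain's support examples is all that is needed to adapt.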