AI Navigate

Graph In-Context Operator Networks for Generalizable Spatiotemporal Prediction

arXiv cs.AI / 3/16/2026

💬 Opinion · Models & Research

Key Points

  • In-context operator learning enables neural networks to infer solution operators from contextual examples without weight updates.
  • The work provides a controlled comparison against single-operator learning using identical training data and steps.
  • It introduces GICON (Graph In-Context Operator Network), combining graph message passing for geometric generalization with example-aware positional encoding for cardinality generalization.
  • Experiments on air quality prediction across two Chinese regions show that in-context operator learning outperforms classical operator learning on complex tasks, with strong generalization across spatial domains and robust scaling from few to many inference examples.
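To make the first bullet concrete: in-context operator learning means the model's weights stay frozen at inference, and the solution operator is inferred purely from demonstration (input, output) pairs supplied in the prompt. The minimal numpy sketch below (not the paper's architecture; an attention-style read-out is assumed as a stand-in for the learned conditioning) shows the key property: the prediction for a query changes when the context examples change, with no gradient step.

```python
import numpy as np

def in_context_predict(demo_inputs, demo_outputs, query, temp=1.0):
    """Predict the operator's output for `query` from context demos alone.

    demo_inputs  : (k, d) array of example input functions (flattened)
    demo_outputs : (k, m) array of the operator applied to each input
    query        : (d,) new input; no weights are updated anywhere.
    """
    # Attention-style similarity between the query and each demo input
    sims = demo_inputs @ query / temp
    weights = np.exp(sims - sims.max())   # softmax over context examples
    weights /= weights.sum()
    # Read out a weighted combination of demo outputs
    return weights @ demo_outputs

# The same frozen "model" adapts to whichever operator the context shows:
demos_u = np.eye(3)                 # three probe inputs
demos_s = 2.0 * np.eye(3)           # context implies the operator u -> 2u
pred = in_context_predict(demos_u, demos_s, np.array([1.0, 0.0, 0.0]), temp=0.01)
```

Swapping in demos from a different operator changes `pred` immediately, which is the "no weight updates" adaptation the bullet describes.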

Abstract

In-context operator learning enables neural networks to infer solution operators from contextual examples without weight updates. While prior work has demonstrated the effectiveness of this paradigm in leveraging vast datasets, a systematic comparison against single-operator learning using identical training data has been absent. We address this gap through controlled experiments comparing in-context operator learning against classical operator learning (single-operator models trained without contextual examples) under the same training steps and dataset. To enable this investigation on real-world spatiotemporal systems, we propose GICON (Graph In-Context Operator Network), which combines graph message passing for geometric generalization with example-aware positional encoding for cardinality generalization. Experiments on air quality prediction across two Chinese regions show that in-context operator learning outperforms classical operator learning on complex tasks, generalizing across spatial domains and scaling robustly from the few context examples seen in training to 100 examples at inference.
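The two GICON ingredients named in the abstract can be sketched separately. Below is a minimal, hypothetical illustration (the paper's actual layers, dimensions, and encoding form are not specified here): one mean-aggregation message-passing step over a sensor graph, which is what lets the model transfer across spatial domains with different station layouts, and a sinusoidal encoding of the *example index*, which tags each context example so the model can handle a variable number of them.

```python
import numpy as np

def example_pos_encoding(idx, dim):
    """Sinusoidal encoding of a context-example index (assumed form).

    Tagging examples by index, rather than by fixed slots, is what allows
    inference with more context examples than were seen in training.
    """
    freqs = 1.0 / (10000.0 ** (np.arange(0, dim, 2) / dim))
    enc = np.zeros(dim)
    enc[0::2] = np.sin(idx * freqs)
    enc[1::2] = np.cos(idx * freqs)
    return enc

def message_passing_step(node_feats, adj, w_self, w_nbr):
    """One graph message-passing step: mean-aggregate neighbors, mix, ReLU.

    node_feats : (n, d) station features; adj : (n, n) 0/1 adjacency.
    The update depends only on local connectivity, so the same weights
    apply to any station graph (geometric generalization).
    """
    deg = adj.sum(axis=1, keepdims=True).clip(min=1)   # avoid divide-by-zero
    nbr_mean = (adj @ node_feats) / deg                # average neighbor messages
    return np.maximum(node_feats @ w_self + nbr_mean @ w_nbr, 0.0)
```

A usage sketch: features for example `i` at each station would be concatenated with `example_pos_encoding(i, dim)` before message passing, so the aggregation is aware of which context example each observation belongs to.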