Reason Analogically via Cross-domain Prior Knowledge: An Empirical Study of Cross-domain Knowledge Transfer for In-Context Learning

arXiv cs.AI / 4/8/2026


Key Points

  • The paper investigates whether in-context learning (ICL) can benefit from demonstrations drawn from a different (source) domain when in-domain labeled examples are scarce.
  • It reports empirical evidence of conditional positive cross-domain transfer, showing that source-domain demonstrations can improve target-domain inference despite semantic mismatch.
  • The study identifies an “absorption threshold”: beyond it, positive transfer becomes more likely, and adding more retrieved demonstrations yields larger performance gains.
  • Analysis suggests improvements come from repairing or reusing reasoning structures via retrieved cross-domain examples rather than relying primarily on semantic similarity cues.

Abstract

Despite its success, existing in-context learning (ICL) relies on in-domain expert demonstrations, limiting its applicability when expert annotations are scarce. We posit that different domains may share underlying reasoning structures, enabling source-domain demonstrations to improve target-domain inference despite semantic mismatch. To test this hypothesis, we conduct a comprehensive empirical study of different retrieval methods to validate the feasibility of cross-domain knowledge transfer under the in-context learning setting. Our results demonstrate conditional positive transfer in cross-domain ICL. We identify a clear example absorption threshold: beyond it, positive transfer becomes more likely, and additional demonstrations yield larger gains. Further analysis suggests that these gains stem from reasoning-structure repair by retrieved cross-domain examples, rather than from semantic cues. Overall, our study validates the feasibility of leveraging cross-domain knowledge transfer to improve cross-domain ICL performance, motivating the community to design more effective retrieval approaches for this new direction. Our implementation is available at https://github.com/littlelaska/ICL-TF4LR
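To make the setup concrete, the pipeline described above can be sketched as: retrieve source-domain demonstrations ranked by similarity to the target query, then assemble them into an ICL prompt. This is a minimal illustrative sketch, not the paper's actual method: the function names, the example pool, and the bag-of-words cosine scorer are assumptions standing in for whatever retrieval methods the study compares.

```python
"""Illustrative sketch of cross-domain demonstration retrieval for ICL.

Assumption: a simple bag-of-words cosine similarity stands in for the
retrieval methods evaluated in the paper (which are not detailed here).
"""
from collections import Counter
import math


def cosine(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two strings."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    num = sum(ca[t] * cb[t] for t in ca)
    den = math.sqrt(sum(v * v for v in ca.values())) * math.sqrt(
        sum(v * v for v in cb.values())
    )
    return num / den if den else 0.0


def retrieve(query: str, source_pool: list[dict], k: int = 2) -> list[dict]:
    """Rank source-domain demonstrations by similarity to the target query."""
    return sorted(
        source_pool, key=lambda d: cosine(query, d["question"]), reverse=True
    )[:k]


def build_prompt(query: str, demos: list[dict]) -> str:
    """Prepend retrieved demonstrations (question + rationale) to the query."""
    parts = [f"Q: {d['question']}\nA: {d['rationale']}" for d in demos]
    parts.append(f"Q: {query}\nA:")
    return "\n\n".join(parts)


if __name__ == "__main__":
    # Hypothetical source-domain pool (math reasoning) for a target-domain
    # query (logistics); the shared element is the step-by-step structure.
    pool = [
        {"question": "If 3 pens cost 6 dollars, what does 1 pen cost?",
         "rationale": "Divide total cost by count: 6 / 3 = 2 dollars."},
        {"question": "A train covers 120 km in 2 hours. What is its speed?",
         "rationale": "Speed is distance over time: 120 / 2 = 60 km/h."},
        {"question": "What color is the sky on a clear day?",
         "rationale": "Scattering favors blue light, so the sky looks blue."},
    ]
    target_query = "A truck covers 300 km in 5 hours. What is its speed?"
    demos = retrieve(target_query, pool, k=2)
    print(build_prompt(target_query, demos))
```

The key design point mirrored here is that demonstrations come from a *different* domain than the query: the hypothesis is that their reasoning structure, not their topical content, is what transfers.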