Towards Effective In-context Cross-domain Knowledge Transfer via Domain-invariant-neurons-based Retrieval
arXiv cs.AI / 4/8/2026
Key Points
- The paper studies how to improve LLMs' reasoning performance when in-domain demonstrations are unavailable by transferring demonstrations from other domains.
- It argues that, despite large domain gaps, there are reusable implicit logical structures shared across domains that can support cross-domain in-context learning.
- The authors propose DIN-Retrieval, which builds a domain-invariant hidden representation (DIN vector) and uses it at inference time to retrieve structurally compatible cross-domain examples (see the sketch after this list).
- Experiments on mathematical and logical reasoning transfer tasks show an average improvement of 1.8 points over state-of-the-art retrieval-based methods.
- The work includes an implementation release on GitHub to support reproduction and further experimentation.
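The summary does not specify how the DIN vector is computed or how retrieval is scored, so the following is a minimal sketch under stated assumptions: the DIN vector is taken to be a mean-pooled slice of one LLM layer's hidden states at a fixed set of presumed domain-invariant neuron indices, and candidates are ranked by cosine similarity. All names (`din_vector`, `retrieve_top_k`, `din_idx`) and the random stand-in activations are hypothetical, not the authors' implementation.

```python
import numpy as np

def din_vector(hidden_states: np.ndarray, din_indices: np.ndarray) -> np.ndarray:
    """Build a domain-invariant (DIN) vector for one example.

    hidden_states: (num_tokens, hidden_dim) activations from some LLM layer.
    din_indices:   indices of neurons presumed to be domain-invariant
                   (how the paper selects them is not described here).
    Mean-pools over tokens, then keeps only the selected neurons.
    """
    pooled = hidden_states.mean(axis=0)   # (hidden_dim,)
    return pooled[din_indices]            # (len(din_indices),)

def retrieve_top_k(query_vec: np.ndarray, candidate_vecs: np.ndarray, k: int = 4):
    """Rank candidate demonstrations by cosine similarity to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    sims = c @ q                          # cosine similarity per candidate
    return np.argsort(-sims)[:k], sims

# Toy usage with random stand-ins for real LLM activations.
rng = np.random.default_rng(0)
din_idx = rng.choice(4096, size=256, replace=False)  # hypothetical neuron subset
query = din_vector(rng.normal(size=(12, 4096)), din_idx)
pool = np.stack([din_vector(rng.normal(size=(20, 4096)), din_idx)
                 for _ in range(100)])               # cross-domain candidates
top_idx, sims = retrieve_top_k(query, pool, k=4)
print(top_idx, sims[top_idx])
```

The retrieved top-k examples would then be placed into the prompt as in-context demonstrations; the key design choice this sketch illustrates is that similarity is measured only over the domain-invariant neuron subset rather than the full hidden state, so surface-level domain differences are ignored during retrieval.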