CoDA: Towards Effective Cross-domain Knowledge Transfer via CoT-guided Domain Adaptation
arXiv cs.AI / 4/22/2026
Key Points
- The paper argues that while LLMs excel at logical reasoning, they often underperform in real-world domains where high-quality in-domain examples for in-context learning are scarce or unavailable.
- It notes that retrieving cross-domain examples as surrogate demonstrations has yielded only modest improvements, mainly because large domain shifts prevent the model from reliably extracting the shared latent reasoning structures.
- To address this, the authors introduce CoDA, which trains a lightweight adapter to intervene on intermediate hidden states rather than relying on raw text prompting alone (a minimal adapter sketch follows this list).
- CoDA combines feature-based distillation from CoT-enriched references with a Maximum Mean Discrepancy (MMD) term that aligns kernelized source and target latent distributions (see the loss sketch after this list).
- Experiments across multiple logical reasoning tasks and model families show CoDA significantly outperforms prior state-of-the-art baselines, indicating more effective cross-domain knowledge transfer.
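The paper summary above does not include reference code, but the adapter idea can be illustrated with a minimal PyTorch sketch: a residual bottleneck module attached to one intermediate layer of a frozen model via a forward hook. The class name `BottleneckAdapter`, the bottleneck width, and the layer path in the final comment are illustrative assumptions, not CoDA's actual architecture.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Hypothetical lightweight adapter: a residual bottleneck MLP applied to
    an intermediate hidden state of a frozen base model."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual update: the frozen model's representation is shifted,
        # not replaced, so only the small adapter needs training.
        return hidden_states + self.up(self.act(self.down(hidden_states)))


adapter = BottleneckAdapter(hidden_dim=4096)  # hidden_dim must match the base model

def intervene(module, inputs, output):
    # Decoder layers usually return a tuple whose first element is the
    # hidden-state tensor; rewrite it with the adapted version.
    if isinstance(output, tuple):
        return (adapter(output[0]),) + output[1:]
    return adapter(output)

# Attach to one intermediate layer of a frozen Hugging Face-style model, e.g.:
# handle = model.model.layers[20].register_forward_hook(intervene)  # layer index is an assumption
```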
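The training objective can likewise be sketched under the assumption that it sums an MSE feature-distillation term against CoT-enriched reference features with an RBF-kernel MMD term between source- and target-domain latents. The function names, the Gaussian kernel choice, and the weight `lam` are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def rbf_kernel(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Gaussian (RBF) kernel matrix from pairwise squared Euclidean distances.
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd_loss(source: torch.Tensor, target: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # Squared Maximum Mean Discrepancy between two batches of latent features.
    return (rbf_kernel(source, source, sigma).mean()
            + rbf_kernel(target, target, sigma).mean()
            - 2 * rbf_kernel(source, target, sigma).mean())

def coda_style_objective(adapted_feats, cot_reference_feats,
                         source_feats, target_feats, lam: float = 0.1) -> torch.Tensor:
    # Feature-based distillation: pull adapted hidden states toward
    # precomputed CoT-enriched reference features.
    distill = F.mse_loss(adapted_feats, cot_reference_feats)
    # Distribution alignment: MMD between source- and target-domain latents.
    return distill + lam * mmd_loss(source_feats, target_feats)
```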


