CD-FKD: Cross-Domain Feature Knowledge Distillation for Robust Single-Domain Generalization in Object Detection
arXiv cs.CV / 3/18/2026
📰 News · Models & Research
Key Points
- The paper introduces CD-FKD, a cross-domain feature distillation framework that improves single-domain generalization for object detection by combining global and instance-wise feature distillation.
- The training strategy feeds the student diversified data (downscaled and corrupted versions of source images), while the teacher operates on the original source-domain data to guide learning.
- By mimicking the teacher's features, the student learns object-centric representations that improve detection under challenging domain shifts, including corrupted scenarios.
- Experiments show that CD-FKD outperforms state-of-the-art methods in both target-domain generalization and source-domain performance, with implications for real-world applications such as autonomous driving and surveillance.
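The training recipe in the bullets above can be sketched as a feature-matching loss between a teacher run on clean source data and a student run on a diversified copy of the same image. This is a minimal NumPy illustration, not the paper's actual formulation: the `corrupt` augmentation, the box coordinates standing in for detected instances, and the equal weighting of the global and instance-wise terms are all hypothetical assumptions.

```python
import numpy as np

def corrupt(x, rng, noise_std=0.1):
    # Hypothetical augmentation: additive Gaussian noise stands in for the
    # paper's downscaling/corruption pipeline applied to the student's input.
    return x + rng.normal(0.0, noise_std, size=x.shape)

def feature_distill_loss(teacher_feat, student_feat, boxes=None):
    """Global + instance-wise feature distillation loss (illustrative sketch).

    teacher_feat, student_feat: (C, H, W) feature maps from the teacher
    (clean source input) and student (diversified input), respectively.
    boxes: optional list of (y0, y1, x0, x1) regions approximating
    object instances on the feature map.
    """
    # Global distillation: the student mimics the whole feature map.
    global_loss = np.mean((teacher_feat - student_feat) ** 2)

    # Instance-wise distillation: the student additionally mimics
    # object-centric features inside each instance region.
    inst_loss = 0.0
    if boxes:
        for (y0, y1, x0, x1) in boxes:
            t = teacher_feat[:, y0:y1, x0:x1]
            s = student_feat[:, y0:y1, x0:x1]
            inst_loss += np.mean((t - s) ** 2)
        inst_loss /= len(boxes)

    # Equal weighting of the two terms is an assumption for this sketch.
    return global_loss + inst_loss
```

In practice the feature maps would come from a detection backbone (e.g. an FPN level) and the regions from proposals or ground-truth boxes; the sketch only shows the shape of the objective.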