AI Navigate

ProCal: Probability Calibration for Neighborhood-Guided Source-Free Domain Adaptation

arXiv cs.CV / 3/20/2026


Key Points

  • ProCal introduces a probability calibration mechanism for Source-Free Domain Adaptation: a dual-model collaborative prediction blends the source model's initial predictions with the target model's online outputs to calibrate neighborhood-based predictions.
  • It mitigates over-reliance on local neighbor similarity, reduces susceptibility to local noise, and helps preserve discriminative knowledge from the source model during adaptation.
  • The approach uses a joint objective that combines soft supervision loss with a diversity loss, and theoretical analysis shows convergence to an equilibrium that fuses source and target information.
  • Empirical validation on 31 cross-domain tasks across four public datasets demonstrates its effectiveness, and the authors release code at the provided GitHub repository.
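The paper's exact calibration formula is not given in this summary, but the dual-model idea in the points above can be sketched as a convex blend: fuse the frozen source model's prediction with the current target model's prediction, then use that fused distribution to temper the neighbor-aggregated prediction. The function name, the mixing weights `alpha` and `beta`, and the simple mean over neighbors are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def calibrate_neighbor_probs(p_source, p_target, p_neighbors,
                             alpha=0.5, beta=0.3):
    """Hypothetical sketch of a ProCal-style calibration step.

    p_source:    (C,) softmax output of the frozen source model
    p_target:    (C,) online softmax output of the adapting target model
    p_neighbors: (K, C) softmax outputs of the K nearest neighbors
    alpha, beta: assumed mixing weights (not from the paper)
    """
    # Dual-model collaborative prediction: fuse source and target views.
    p_dual = alpha * p_source + (1.0 - alpha) * p_target

    # Aggregate neighbor predictions (here: a plain mean for illustration).
    p_neighbor = p_neighbors.mean(axis=0)

    # Calibrate the neighbor-based prediction with the dual-model prior,
    # limiting over-reliance on possibly noisy local similarity.
    p_cal = beta * p_dual + (1.0 - beta) * p_neighbor

    # Renormalize so the result is a valid probability distribution.
    return p_cal / p_cal.sum()
```

Because every term in the blend is a probability distribution and the weights are convex, the calibrated output stays a valid distribution; the source term acts as an anchor that preserves discriminative source knowledge even when neighbors disagree.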

Abstract

Source-Free Domain Adaptation (SFDA) adapts pre-trained models to unlabeled target domains without requiring access to source data. Although state-of-the-art methods leveraging local neighborhood structures show promise for SFDA, they tend to over-rely on prediction similarity among neighbors. This over-reliance accelerates the forgetting of source knowledge and increases susceptibility to local noise overfitting. To address these issues, we introduce ProCal, a probability calibration method that dynamically calibrates neighborhood-based predictions through a dual-model collaborative prediction mechanism. ProCal integrates the source model's initial predictions with the current model's online outputs to effectively calibrate neighbor probabilities. This strategy not only mitigates the interference of local noise but also preserves the discriminative information from the source model, thereby achieving a balance between knowledge retention and domain adaptation. Furthermore, we design a joint optimization objective that combines a soft supervision loss with a diversity loss to guide the target model. Our theoretical analysis shows that ProCal converges to an equilibrium where source knowledge and target information are effectively fused, reducing both knowledge forgetting and overfitting. We validate the effectiveness of our approach through extensive experiments on 31 cross-domain tasks across four public datasets. Our code is available at: https://github.com/zhengyinghit/ProCal.
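The abstract names a joint objective combining a soft supervision loss with a diversity loss but does not spell out the formulas. A common instantiation of these two terms, shown here purely as an assumption about what such an objective could look like, is a soft cross-entropy against the calibrated soft labels plus a negative-entropy penalty on the batch-mean prediction (which, when minimized, pushes predictions to spread across classes). The function names and the weight `lam` are hypothetical.

```python
import numpy as np

def soft_supervision_loss(p_model, p_cal, eps=1e-8):
    """Soft cross-entropy: calibrated distributions p_cal (N, C) act as
    soft labels for the target model's predictions p_model (N, C)."""
    return -np.mean(np.sum(p_cal * np.log(p_model + eps), axis=1))

def diversity_loss(p_model, eps=1e-8):
    """Negative entropy of the batch-mean prediction; minimizing it
    discourages collapse onto a single class."""
    p_mean = p_model.mean(axis=0)
    return np.sum(p_mean * np.log(p_mean + eps))

def joint_objective(p_model, p_cal, lam=1.0):
    # Assumed weighting: supervision term plus lam times the diversity term.
    return soft_supervision_loss(p_model, p_cal) + lam * diversity_loss(p_model)
```

In this sketch the supervision term pulls the target model toward the calibrated labels (fusing source and neighbor information), while the diversity term counteracts the degenerate solution where every target sample receives the same label.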