Back to Source: Open-Set Continual Test-Time Adaptation via Domain Compensation

arXiv cs.CV / 4/24/2026


Key Points

  • The paper introduces Open-set Continual Test-Time Adaptation (OCTTA), a realistic setting where domain shifts occur continuously during inference while unknown semantic classes can also appear.
  • It argues that coupling between domain shift and semantic novelty can collapse the feature space, hurting both in-domain classification and out-of-distribution (OOD) detection.
  • The authors propose DOmain COmpensation (DOCO), a lightweight framework that jointly performs domain adaptation and OOD detection using a closed-loop process.
  • DOCO dynamically splits samples into likely in-distribution (ID) vs OOD, learns a domain-compensation prompt from ID samples by aligning feature statistics to the source domain, and uses a structural regularizer to prevent semantic distortion.
  • The learned prompt is applied to OOD samples within the batch to better isolate semantic novelty, and experiments show DOCO sets a new state of the art over prior CTTA/OSTTA methods on multiple OCTTA benchmarks.
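The closed-loop batch step described in the bullets above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's implementation: the MSP-based ID/OOD score, the 0.5 threshold, and the purely additive mean-alignment "prompt" are all assumptions chosen for simplicity.

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability — a common (assumed) ID/OOD score."""
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return p.max(axis=1)

def doco_step(feats, logits, mu_src, threshold=0.5):
    """One illustrative OCTTA batch step (hypothetical DOCO-style sketch).

    1. Split the batch into likely-ID vs. likely-OOD samples via MSP.
    2. Estimate an additive 'compensation prompt' from ID samples only,
       aligning their mean feature statistics to the source mean.
    3. Apply the same prompt to every sample, including OOD ones, so that
       remaining deviation reflects semantic novelty, not domain shift.
    """
    scores = msp_score(logits)
    id_mask = scores >= threshold
    if id_mask.any():
        prompt = mu_src - feats[id_mask].mean(axis=0)  # first-moment alignment
    else:
        prompt = np.zeros_like(mu_src)  # no confident ID samples: no-op
    return feats + prompt, id_mask
```

After this step, an OOD score recomputed on the compensated features would, per the paper's argument, isolate semantic novelty because the domain-induced shift has been subtracted out.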

Abstract

Test-Time Adaptation (TTA) aims to mitigate distributional shifts between training and test domains during inference time. However, existing TTA methods fall short in the realistic scenario where models face both continually changing domains and the simultaneous emergence of unknown semantic classes, a challenging setting we term Open-set Continual Test-Time Adaptation (OCTTA). The coupling of domain and semantic shifts often collapses the feature space, severely degrading both classification and out-of-distribution (OOD) detection. To tackle this, we propose DOmain COmpensation (DOCO), a lightweight and effective framework that robustly performs domain adaptation and OOD detection in a synergistic, closed loop. DOCO first performs dynamic, adaptation-conditioned sample splitting to separate likely ID from OOD samples. Then, using only the ID samples, it learns a domain compensation prompt by aligning feature statistics with the source domain, guided by a structural preservation regularizer that prevents semantic distortion. This learned prompt is then propagated to the OOD samples within the same batch, effectively isolating their semantic novelty for more reliable detection. Extensive experiments on multiple challenging benchmarks demonstrate that DOCO outperforms prior CTTA and OSTTA methods, establishing a new state of the art for the demanding OCTTA setting.
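The abstract names two training signals for the compensation prompt: a statistic-alignment objective and a structural preservation regularizer. A minimal sketch of plausible instantiations, assuming the alignment matches per-dimension batch mean and variance to stored source statistics, and the regularizer penalizes changes in pairwise cosine similarity (both are assumptions; the paper's exact losses may differ):

```python
import numpy as np

def alignment_loss(feats, mu_src, var_src):
    """Assumed statistic alignment: squared error between the batch's
    per-dimension mean/variance and the stored source-domain statistics."""
    mu, var = feats.mean(axis=0), feats.var(axis=0)
    return float(((mu - mu_src) ** 2).sum() + ((var - var_src) ** 2).sum())

def structural_loss(feats_before, feats_after):
    """Assumed structural preservation: penalize changes in the pairwise
    cosine-similarity matrix, so compensation shifts feature statistics
    without distorting the batch's semantic structure."""
    def cos_sim(f):
        n = f / np.linalg.norm(f, axis=1, keepdims=True)
        return n @ n.T
    diff = cos_sim(feats_before) - cos_sim(feats_after)
    return float((diff ** 2).mean())
```

Under this reading, the prompt would be optimized to drive `alignment_loss` toward zero on ID samples while `structural_loss` keeps the compensated features geometrically faithful to the originals.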