One Pool Is Not Enough: Multi-Cluster Memory for Practical Test-Time Adaptation

arXiv cs.CV / 3/24/2026


Key Points

  • The paper argues that existing Practical Test-Time Adaptation (PTTA) methods rely on a single unstructured memory pool, which is structurally ill-suited to PTTA’s temporally correlated and non-i.i.d. test streams.
  • It presents a stream clusterability analysis showing that PTTA streams are inherently multi-modal, with the optimal number of mixture components consistently greater than one.
  • The authors propose Multi-Cluster Memory (MCM), a plug-and-play framework that stores samples in multiple clusters using lightweight pixel-level statistical descriptors.
  • MCM improves memory stability and supervision through three mechanisms: descriptor-based assignment to capture distinct modes, Adjacent Cluster Consolidation to control memory growth, and Uniform Cluster Retrieval to balance coverage across modes.
  • Experiments integrating MCM with three TTA methods across CIFAR-10-C, CIFAR-100-C, ImageNet-C, and DomainNet show consistent gains up to 5.00% (ImageNet-C) and 12.13% (DomainNet), particularly for tasks with higher distributional complexity and multi-modality.
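The three mechanisms listed above can be sketched in a compact form. The following is an illustrative Python sketch, not the paper's implementation: the specific descriptor (per-image pixel mean and standard deviation), the distance threshold `tau`, the cluster budget `max_clusters`, and the per-cluster cap are all assumptions made for the example.

```python
import random
from statistics import mean, pstdev

class MultiClusterMemory:
    """Hedged sketch of a multi-cluster memory in the spirit of MCM.

    Samples are summarized by a lightweight pixel-level descriptor and
    routed to the nearest cluster; a new cluster opens when no existing
    cluster is close enough (threshold `tau` is an assumption).
    """

    def __init__(self, max_clusters=8, cap=16, tau=0.5):
        self.max_clusters = max_clusters  # cluster budget before consolidation
        self.cap = cap                    # per-cluster sample capacity (FIFO)
        self.tau = tau                    # assignment distance threshold
        self.clusters = []                # each: {"descs": [...], "samples": [...]}

    @staticmethod
    def descriptor(image):
        # image: flat list of pixel intensities; descriptor = (mean, std)
        return (mean(image), pstdev(image))

    @staticmethod
    def _dist(a, b):
        return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

    def _centroid(self, c):
        return (mean(d[0] for d in c["descs"]), mean(d[1] for d in c["descs"]))

    def add(self, image):
        d = self.descriptor(image)
        if self.clusters:
            # Descriptor-based assignment: route to the nearest cluster.
            best = min(self.clusters, key=lambda c: self._dist(d, self._centroid(c)))
            if self._dist(d, self._centroid(best)) <= self.tau:
                best["descs"].append(d)
                best["samples"].append(image)
                del best["descs"][:-self.cap]    # FIFO: keep newest `cap` items
                del best["samples"][:-self.cap]
                return
        self.clusters.append({"descs": [d], "samples": [image]})
        if len(self.clusters) > self.max_clusters:
            # Adjacent Cluster Consolidation: merge the most similar
            # temporally adjacent pair to bound memory growth.
            i = min(range(len(self.clusters) - 1),
                    key=lambda j: self._dist(self._centroid(self.clusters[j]),
                                             self._centroid(self.clusters[j + 1])))
            a, b = self.clusters[i], self.clusters.pop(i + 1)
            a["descs"] = (a["descs"] + b["descs"])[-self.cap:]
            a["samples"] = (a["samples"] + b["samples"])[-self.cap:]

    def retrieve(self, batch_size):
        # Uniform Cluster Retrieval: round-robin across clusters so every
        # mode contributes to the adaptation batch.
        return [random.choice(self.clusters[k % len(self.clusters)]["samples"])
                for k in range(batch_size)]
```

A short usage example: feeding a stream with two well-separated intensity modes should keep the memory within its cluster budget while retrieval draws from both modes.

```python
mem = MultiClusterMemory(max_clusters=2, tau=0.1)
for v in [0.1, 0.12, 0.9, 0.88, 0.5]:
    mem.add([v] * 4)
batch = mem.retrieve(4)  # round-robin over the surviving clusters
```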

Abstract

Test-time adaptation (TTA) adapts pre-trained models to distribution shifts at inference using only unlabeled test data. Under the Practical TTA (PTTA) setting, where test streams are temporally correlated and non-i.i.d., memory has become an indispensable component for stable adaptation, yet existing methods universally store samples in a single unstructured pool. We show that this single-cluster design is fundamentally mismatched to PTTA: a stream clusterability analysis reveals that test streams are inherently multi-modal, with the optimal number of mixture components consistently far exceeding one. To close this structural gap, we propose Multi-Cluster Memory (MCM), a plug-and-play framework that organizes stored samples into multiple clusters using lightweight pixel-level statistical descriptors. MCM introduces three complementary mechanisms: descriptor-based cluster assignment to capture distinct distributional modes, Adjacent Cluster Consolidation (ACC) to bound memory usage by merging the most similar temporally adjacent clusters, and Uniform Cluster Retrieval (UCR) to ensure balanced supervision across all modes during adaptation. Integrated with three contemporary TTA methods on CIFAR-10-C, CIFAR-100-C, ImageNet-C, and DomainNet, MCM achieves consistent improvements across all 12 configurations, with gains up to 5.00% on ImageNet-C and 12.13% on DomainNet. Notably, these gains scale with distributional complexity: larger label spaces with greater multi-modality benefit most from multi-cluster organization. GMM-based memory diagnostics further confirm that MCM maintains near-optimal distributional balance, entropy, and mode coverage, whereas single-cluster memory exhibits persistent imbalance and progressive mode loss. These results establish memory organization as a key design axis for practical test-time adaptation.
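The abstract's memory diagnostics quantify how evenly the memory covers distributional modes. One simple such measure, shown here as an assumed sketch rather than the paper's exact diagnostic, is the normalized entropy of cluster occupancy: 1.0 means perfectly balanced coverage, while values near 0 indicate the collapse onto a few modes reported for single-cluster memory.

```python
import math

def occupancy_entropy(counts):
    """Normalized entropy of memory occupancy across clusters/modes.

    counts: number of stored samples per cluster. Returns a value in
    [0, 1]; 1.0 = perfectly balanced coverage, near 0 = mode collapse.
    (Illustrative metric, not the paper's exact GMM-based diagnostic.)
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    # Normalize by the maximum achievable entropy, log(#clusters).
    return h / math.log(len(counts)) if len(counts) > 1 else 1.0

# A balanced memory scores near 1.0; a collapsed one scores near 0.
balanced = occupancy_entropy([10, 10, 10, 10])
collapsed = occupancy_entropy([40, 0, 0, 0])
```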