ReHARK: Refined Hybrid Adaptive RBF Kernels for Robust One-Shot Vision-Language Adaptation

arXiv cs.CV / 3/13/2026

📰 News · Models & Research

Key Points

  • The paper addresses the stability-plasticity trade-off in adapting large-scale vision-language models to downstream tasks with extremely limited data, highlighting limitations of prior training-free methods that rely on local estimators.
  • ReHARK reinterprets few-shot adaptation as global proximal regularization in a reproducing kernel Hilbert space (RKHS) and introduces a training-free, multistage refinement pipeline to improve robustness.
  • The pipeline includes Hybrid Prior Construction (fusing zero-shot textual knowledge from CLIP and GPT-3 with visual class prototypes), Support Set Augmentation (bridging), Adaptive Distribution Rectification, and Multi-Scale RBF Kernels.
  • On 11 benchmarks, ReHARK achieves an average accuracy of 65.83%, setting a new state of the art for one-shot vision-language adaptation; the code is released on GitHub for practical adoption.
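To make the last pipeline stage concrete, here is a minimal sketch of a multi-scale RBF kernel ensemble: a query feature is scored against cached support features at several bandwidths, and the per-scale class scores are averaged. Function and variable names (`multiscale_rbf_scores`, `gammas`) are illustrative, not taken from the paper's code.

```python
import numpy as np

def multiscale_rbf_scores(query, support, labels, n_classes, gammas=(1.0, 4.0, 16.0)):
    """Score a query feature against support features with an ensemble of
    RBF kernels at several bandwidths (gammas), averaging the per-scale
    class-similarity scores. All names here are illustrative."""
    # Squared Euclidean distance from the query to each support feature.
    d2 = np.sum((support - query) ** 2, axis=1)
    scores = np.zeros(n_classes)
    for gamma in gammas:
        k = np.exp(-gamma * d2)            # RBF kernel response at this scale
        for c in range(n_classes):
            scores[c] += k[labels == c].sum()
    return scores / len(gammas)

# Toy usage: 4 support features from 2 classes, 8-dim features.
rng = np.random.default_rng(0)
support = rng.normal(size=(4, 8))
labels = np.array([0, 0, 1, 1])
query = support[0] + 0.01 * rng.normal(size=8)  # near a class-0 sample
print(multiscale_rbf_scores(query, support, labels, n_classes=2).argmax())  # → 0
```

Averaging over several `gamma` values is one simple way to capture feature geometry at multiple scales, which is the stated motivation for the kernel ensemble.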

Abstract

The adaptation of large-scale Vision-Language Models (VLMs) like CLIP to downstream tasks with extremely limited data -- specifically in the one-shot regime -- is often hindered by a significant "Stability-Plasticity" dilemma. While efficient caching mechanisms have been introduced by training-free methods such as Tip-Adapter, these approaches often function as local Nadaraya-Watson estimators. Such estimators are characterized by inherent boundary bias and a lack of global structural regularization. In this paper, ReHARK (Refined Hybrid Adaptive RBF Kernels) is proposed as a synergistic training-free framework that reinterprets few-shot adaptation through global proximal regularization in a Reproducing Kernel Hilbert Space (RKHS). A multistage refinement pipeline is introduced, consisting of: (1) Hybrid Prior Construction, where zero-shot textual knowledge from CLIP and GPT-3 is fused with visual class prototypes to form a robust semantic-visual anchor; (2) Support Set Augmentation (Bridging), where intermediate samples are generated to smooth the transition between visual and textual modalities; (3) Adaptive Distribution Rectification, where test feature statistics are aligned with the augmented support set to mitigate domain shifts; and (4) Multi-Scale RBF Kernels, where an ensemble of kernels is employed to capture complex feature geometries across diverse scales. Superior stability and accuracy are demonstrated through extensive experiments on 11 diverse benchmarks. A new state-of-the-art for one-shot adaptation is established by ReHARK, which achieves an average accuracy of 65.83%, significantly outperforming existing baselines. Code is available at https://github.com/Jahid12012021/ReHARK.
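The abstract's framing of cache-based methods such as Tip-Adapter as local Nadaraya-Watson estimators can be sketched as follows: the prediction for a query is a kernel-weighted average of cached one-hot labels. This is a simplified illustration, not the paper's implementation; the `beta` sharpness parameter and all names are assumptions.

```python
import numpy as np

def nadaraya_watson_logits(query, keys, values, beta=5.5):
    """A cache classifier read as a local Nadaraya-Watson estimator:
    the output is a kernel-weighted average of cached one-hot labels.
    Assumes L2-normalized features; `beta` is an illustrative
    sharpness hyperparameter."""
    affinity = query @ keys.T                    # cosine similarity to each cached key
    weights = np.exp(-beta * (1.0 - affinity))   # kernel weight per cached sample
    return weights @ values                      # weighted vote over classes

# Toy usage: 6 cached features from 3 classes, 16-dim normalized features.
rng = np.random.default_rng(1)
keys = rng.normal(size=(6, 16))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)
values = np.eye(3)[np.array([0, 0, 1, 1, 2, 2])]  # one-hot cached labels
query = keys[4]                                   # query identical to a class-2 key
print(nadaraya_watson_logits(query, keys, values).argmax())  # → 2
```

Because the weights depend only on distances to the few cached samples, the estimate is purely local, which is the source of the boundary bias and missing global regularization that the RKHS proximal view is meant to address.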