Prototype-Based Test-Time Adaptation of Vision-Language Models
arXiv cs.CV / 4/24/2026
Key Points
- The paper proposes Prototype-Based Test-Time Adaptation (PTA) for vision-language models to reduce the distribution gap between pre-training and test data during inference.
- PTA is backpropagation-free and avoids cache-based designs, instead maintaining class-specific knowledge prototypes that are updated by accumulating information from test samples.
- It adaptively weights each prototype update by the test sample's zero-shot class confidence, folding the sample's visual features into the corresponding class prototype (see the sketch after this list).
- By integrating past test knowledge only into prototypes, PTA eliminates cache population and retrieval overhead, improving efficiency and scalability as the number of classes grows.
- Experiments report state-of-the-art results across 15 image recognition benchmarks and 4 robust point cloud analysis benchmarks; on 10 cross-domain benchmarks PTA improves CLIP accuracy from 65.64% to 69.38% while retaining about 92% of CLIP's inference speed on ImageNet-1K, outperforming cache-based TTA methods in both accuracy and speed.
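
The following is a minimal sketch of what a confidence-weighted prototype update could look like, assuming an exponential-moving-average style rule; the class names, the `alpha` mixing rate, and the way predictions fuse adapted prototypes with a frozen zero-shot classifier are illustrative assumptions, not the authors' implementation.

```python
# Sketch: backprop-free, cache-free test-time adaptation via class prototypes.
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

class PrototypeTTA:
    def __init__(self, text_embeddings, alpha=0.1):
        # Prototypes start from the zero-shot text embeddings (one per class).
        self.prototypes = l2_normalize(np.asarray(text_embeddings, dtype=np.float32))
        self.text = self.prototypes.copy()   # frozen zero-shot classifier
        self.alpha = alpha                    # hypothetical base update rate

    def step(self, image_feature, temperature=0.01):
        f = l2_normalize(np.asarray(image_feature, dtype=np.float32))
        # Zero-shot class confidence from the frozen text embeddings.
        logits = self.text @ f / temperature
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        c = int(probs.argmax())
        w = float(probs[c])  # confidence weight for this update
        # Accumulate the visual feature into the predicted class prototype,
        # scaled by the zero-shot confidence (no backprop, no cache lookup).
        self.prototypes[c] = l2_normalize(
            (1.0 - self.alpha * w) * self.prototypes[c] + self.alpha * w * f
        )
        # Predict with the adapted prototypes.
        return int((self.prototypes @ f).argmax())

# Usage with random stand-in features (10 classes, 512-dim embeddings):
rng = np.random.default_rng(0)
tta = PrototypeTTA(rng.normal(size=(10, 512)))
pred = tta.step(rng.normal(size=512))
```

Because each test sample touches only the single prototype for its predicted class, per-sample cost stays constant as the class count grows, which is the efficiency argument the paper makes against cache population and retrieval.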