Improving Calibration in Test-Time Prompt Tuning for Vision-Language Models via Data-Free Flatness-Aware Prompt Pretraining

arXiv cs.CV / 5/1/2026


Key Points

  • Test-time prompt tuning (TPT) can improve vision-language models using unlabeled test data, but it often yields poorly calibrated (less reliable) predictions.
  • The study finds that common calibration-improving regularization approaches tend to steer optimization toward flatter loss minima, and that loss-landscape sharpness around adapted prompts strongly affects calibration quality.
  • It introduces Flatness-aware Prompt Pretraining (FPP), which pretrains/initializes prompts in flatter regions before performing standard TPT adaptation.
  • The authors report that swapping only the prompt initialization in existing TPT pipelines can improve both calibration and performance without changing other components.
  • FPP is data-free (requires no labeled data) and adds no extra test-time computational cost, with code released on GitHub.
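To make the setup above concrete: test-time prompt tuning is typically described as minimizing the prediction entropy over augmented views of a single unlabeled test image, updating only the prompt parameters. The sketch below is a toy illustration of that idea, not the paper's implementation; the `views` matrices stand in for CLIP-style class scoring, and the finite-difference gradient stands in for backpropagation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

def tpt_step(prompt, views, lr=0.1, eps=1e-4):
    """One toy test-time prompt-tuning step: minimize mean prediction
    entropy across augmented views. Each view W is a (num_classes, dim)
    matrix; logits are W @ prompt (a stand-in for CLIP similarity)."""
    def loss(p):
        return float(np.mean([entropy(softmax(W @ p)) for W in views]))
    base = loss(prompt)
    grad = np.zeros_like(prompt)
    for i in range(prompt.size):  # numerical gradient; fine at toy scale
        d = np.zeros_like(prompt)
        d[i] = eps
        grad[i] = (loss(prompt + d) - base) / eps
    return prompt - lr * grad, base

rng = np.random.default_rng(0)
views = [rng.normal(size=(5, 8)) for _ in range(4)]  # 4 views, 5 classes
prompt = 0.1 * rng.normal(size=8)                    # initial prompt vector
losses = []
for _ in range(50):
    prompt, l = tpt_step(prompt, views)
    losses.append(l)
# Entropy drops as the prompt sharpens the (unlabeled) predictions.
```

Note the adaptation signal here is entirely unsupervised, which is why calibration can degrade: the objective rewards confident predictions regardless of correctness.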

Abstract

Test-time prompt tuning (TPT) has emerged as a promising technique for enhancing the adaptability of vision-language models by optimizing textual prompts using unlabeled test data. However, prior studies have observed that TPT often produces poorly calibrated models, raising concerns about the reliability of their predictions. Recent works address this issue by incorporating additional regularization terms that constrain model outputs, which improve calibration but often degrade performance. In this work, we reveal that these regularization strategies implicitly encourage optimization toward flatter minima, and that the sharpness of the loss landscape around adapted prompts is a key factor governing calibration quality. Motivated by this observation, we introduce Flatness-aware Prompt Pretraining (FPP), a simple yet effective pretraining framework for TPT that initializes prompts within flatter regions of the loss landscape prior to adaptation. We show that simply replacing the initialization in existing TPT pipelines--without modifying any other components--is sufficient to improve both calibration and performance. Notably, FPP requires no labeled data and incurs no additional computational costs during test-time tuning, making it highly practical for real-world deployment. The code is available at: https://github.com/YonseiML/fpp.