TTL: Test-time Textual Learning for OOD Detection with Pretrained Vision-Language Models
arXiv cs.CL / 4/20/2026
Key Points
- The paper proposes Test-time Textual Learning (TTL) to improve out-of-distribution (OOD) detection using pretrained vision-language models (e.g., CLIP) without requiring any fixed external OOD label set.
- TTL dynamically learns OOD textual semantics from unlabeled test streams by updating learnable prompts with pseudo-labeled test samples.
- To mitigate errors from pseudo-label noise, the method introduces an OOD knowledge purification strategy that selects more reliable OOD samples for adaptation while suppressing unreliable ones.
- TTL also uses an OOD Textual Knowledge Bank to store high-quality textual features, enabling more stable score calibration across different batches.
- Experiments on two benchmarks spanning nine OOD datasets show that TTL achieves state-of-the-art performance; the authors release code for replication.
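The pipeline sketched in the key points can be illustrated with a minimal NumPy toy example. This is not the authors' implementation: the `OODTextualBank` class, the thresholds `tau_ood` and `tau_pure`, and the MCM-style max-softmax score standing in for the paper's scoring function are all illustrative assumptions; the actual method updates learnable text prompts rather than raw feature vectors.

```python
import numpy as np

def normalize(x):
    """L2-normalize feature vectors along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def mcm_score(img_feats, id_text_feats, temp=0.01):
    """MCM-style ID-ness score: max softmax over ID class similarities.
    High score = likely in-distribution; low score = likely OOD."""
    logits = (img_feats @ id_text_feats.T) / temp
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.max(axis=1)

class OODTextualBank:
    """Stores the most reliable OOD features seen so far (hypothetical
    stand-in for the paper's OOD Textual Knowledge Bank)."""
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = []  # list of (reliability, feature)

    def add(self, feat, reliability):
        self.entries.append((reliability, feat))
        self.entries.sort(key=lambda t: -t[0])
        self.entries = self.entries[:self.capacity]

    def prototypes(self):
        if not self.entries:
            return None
        return np.stack([f for _, f in self.entries])

def ttl_step(batch, id_text_feats, bank, tau_ood=0.6, tau_pure=0.55):
    """One test-time adaptation step on an unlabeled batch:
    1) pseudo-label low-score samples as OOD,
    2) purify: keep only the most confident OOD samples for the bank,
    3) calibrate scores using similarity to banked OOD prototypes."""
    scores = mcm_score(batch, id_text_feats)
    for feat, s in zip(batch, scores):
        if s < tau_ood and s < tau_pure:  # pseudo-labeled OOD, then purified
            bank.add(feat, reliability=1.0 - s)
    protos = bank.prototypes()
    if protos is None:
        return scores
    ood_sim = (batch @ protos.T).max(axis=1)
    return scores - 0.5 * ood_sim  # push samples near OOD prototypes down
```

A usage example: with two orthogonal ID class text features, an image feature aligned with class 0 keeps a high calibrated score, while an orthogonal (OOD) image feature is pseudo-labeled, banked, and pushed toward a low score on the same pass.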