UCS: Estimating Unseen Coverage for Improved In-Context Learning
arXiv cs.LG / 4/15/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper introduces UCS (Unseen Coverage Selection), a training-free method for improving in-context learning by selecting demonstration sets based on how well they cover latent clusters not present in the currently selected subset.
- UCS works by inducing discrete latent clusters from model-consistent embeddings and then estimating the mass of clusters not yet revealed by a candidate subset, using a Smoothed Good–Turing estimator applied to the subset's empirical frequency spectrum.
- The authors show UCS can be combined with existing query-dependent or query-independent selection baselines via a simple regularized objective without retraining.
- Experiments on intent-classification and reasoning benchmarks with frontier LLMs find that adding UCS to strong baselines improves ICL accuracy by about 2–6% under the same selection budget.
- The approach also offers interpretability, surfacing task- and model-level latent cluster distributions; the authors release accompanying code on GitHub.
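The core idea in the second point can be sketched concretely. A minimal, hypothetical version uses the basic (unsmoothed) Good–Turing estimate of unseen probability mass, P0 ≈ N1/N, where N1 is the number of clusters observed exactly once in the subset; the paper's smoothed variant, the clustering step, and the function and parameter names below (`unseen_mass`, `combined_score`, `lam`) are assumptions for illustration, not the authors' implementation.

```python
from collections import Counter

def unseen_mass(cluster_ids):
    """Basic Good-Turing estimate of the probability mass of latent
    clusters NOT yet revealed by the subset: P0 ~= N1 / N, where N1 is
    the number of clusters seen exactly once and N is the subset size.
    (The paper uses a smoothed variant of this estimator.)"""
    n = len(cluster_ids)
    if n == 0:
        return 1.0  # nothing observed: all mass is unseen
    counts = Counter(cluster_ids)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / n

def combined_score(base_score, cluster_ids, lam=0.5):
    """Hypothetical regularized objective: keep the score of an existing
    query-dependent or query-independent selector (base_score) and
    penalize candidate subsets whose estimated unseen-cluster mass is
    high, i.e. whose latent-cluster coverage is poor."""
    return base_score - lam * unseen_mass(cluster_ids)

# Example: a subset whose demos fall into clusters [0, 0, 1, 2, 2]
# has N = 5 and N1 = 1 (only cluster 1 is seen once), so P0 = 0.2.
print(unseen_mass([0, 0, 1, 2, 2]))             # -> 0.2
print(combined_score(1.0, [0, 0, 1, 2, 2]))     # -> 0.9
```

Under this reading, the selector simply re-ranks candidate demonstration sets by `combined_score` instead of `base_score`, which is why UCS can be bolted onto existing baselines without any retraining.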