Contrastive Conformal Sets
arXiv cs.LG / March 30, 2026
Key Points
- The paper extends conformal prediction to contrastive learning by building minimum-volume covering sets in the learned semantic embedding space with learnable generalized multi-norm constraints.
- It guarantees user-specified, distribution-free coverage of positive samples while improving the exclusion of negative samples, which it frames in terms of the geometry and volume of the covering sets.
- The authors provide theoretical support that volume minimization can act as a proxy for negative exclusion, allowing the method to work even when negative pairs are unavailable.
- Experiments on both simulated and real image datasets show better inclusion–exclusion trade-offs than standard distance-based conformal baselines.
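The distance-based conformal baseline mentioned in the last point can be sketched in a few lines. The snippet below is a minimal, hypothetical illustration (the prototype, synthetic embeddings, and variable names are assumptions, not the paper's setup): nonconformity is the Euclidean distance from an embedding to a class prototype, and split-conformal calibration picks the ball radius that covers positives at the target rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 2-D "embeddings" of positive calibration samples
# clustered around a prototype; purely illustrative data.
prototype = np.array([1.0, -0.5])
cal_embed = prototype + 0.3 * rng.standard_normal((500, 2))

# Distance-based conformal baseline: nonconformity score is the
# Euclidean distance from each calibration embedding to the prototype.
cal_scores = np.linalg.norm(cal_embed - prototype, axis=1)

# Split-conformal quantile for target coverage 1 - alpha on positives.
alpha = 0.1
n = len(cal_scores)
q_level = np.ceil((n + 1) * (1 - alpha)) / n
radius = np.quantile(cal_scores, q_level, method="higher")

# The covering set is the ball {z : ||z - prototype|| <= radius};
# a fresh positive sample lands inside it with probability >= 1 - alpha.
test_embed = prototype + 0.3 * rng.standard_normal((2000, 2))
coverage = np.mean(np.linalg.norm(test_embed - prototype, axis=1) <= radius)
print(f"empirical coverage of positives: {coverage:.3f}")  # close to 0.9
```

A fixed Euclidean ball like this ignores the shape of the positive cluster; the paper's contribution, per the summary above, is to replace it with learnable generalized multi-norm sets whose volume is minimized, tightening the region around positives and thereby excluding more negatives at the same coverage level.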