Extending Minimal Pairs with Ordinal Surprisal Curves and Entropy Across Applied Domains
arXiv cs.CL / 3/17/2026
Key Points
- The paper extends minimal-pair evaluation from binary grammaticality judgments to ordinal-scale classification, using information-theoretic surprisal and entropy to capture both a model's preferred response and its uncertainty.
- It computes negative log probabilities (surprisal) at each position of a rating scale (e.g., 1-5 or 1-9), so no text generation is required.
- The framework is demonstrated across four domains—social-ecological-technological systems classification, causal statement identification, figurative language detection, and deductive qualitative coding—yielding interpretable signals in each.
- Surprisal curves show minima near the expected scale positions, while entropy is higher for genuinely ambiguous items, offering a more nuanced view of model knowledge than generation-based evaluations.
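The scoring idea in the points above can be sketched in a few lines: given the log-probabilities a model assigns to each rating-scale token, renormalize over the scale, read off per-position surprisal, and compute the entropy of the resulting distribution. This is a minimal illustration, not the paper's exact procedure; the function name and the choice to renormalize over the scale positions only are assumptions.

```python
import math

def surprisal_entropy(logprobs):
    """Hypothetical helper: from per-position log-probabilities (natural log)
    over a rating scale, return the surprisal curve and the entropy
    of the scale-renormalized distribution."""
    # Renormalize so probabilities over the scale positions sum to 1
    # (an assumption; the paper may handle normalization differently).
    z = math.log(sum(math.exp(lp) for lp in logprobs))
    probs = [math.exp(lp - z) for lp in logprobs]
    surprisal = [-math.log(p) for p in probs]          # -log p at each position
    entropy = -sum(p * math.log(p) for p in probs)     # uncertainty over the scale
    return surprisal, entropy

# A confident item: mass concentrated on position 1 of a 1-5 scale.
confident = [math.log(p) for p in (0.7, 0.1, 0.1, 0.05, 0.05)]
s, h = surprisal_entropy(confident)

# An ambiguous item: near-uniform mass, hence higher entropy.
ambiguous = [0.0] * 5
s_u, h_u = surprisal_entropy(ambiguous)
```

The surprisal curve `s` has its minimum at the model's preferred position, and `h < h_u` reflects the lower uncertainty of the confident item, mirroring the qualitative pattern the key points describe.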