Instruction Data Selection via Answer Divergence
arXiv cs.CL / 4/14/2026
Key Points
- The paper introduces Answer Divergence-Guided Selection (ADG) to pick instruction-tuning data using the geometric properties of multiple generated answers per instruction.
- ADG samples several high-temperature outputs, embeds them, and computes a divergence score that captures both dispersion magnitude and directional/shape anisotropy to identify multi-modal answer behavior.
- Experiments across two model backbones and three public instruction pools show that fine-tuning on just 10K ADG-selected examples outperforms other strong selection methods on six benchmarks covering reasoning, knowledge, and coding.
- Ablation/analysis indicates that both dispersion magnitude and shape anisotropy are jointly necessary, supporting answer divergence as a practical signal for instruction data quality.
- The authors provide code and an appendix as supplementary material to support further evaluation and replication.
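The divergence signal described above can be sketched in a few lines: embed each sampled answer, then summarize the embedding cloud by its total variance (dispersion magnitude) and by how concentrated that variance is along the top principal direction (shape anisotropy). The paper's exact scoring formula is not given in this summary, so the weighted combination and the `alpha` parameter below are illustrative assumptions, not the authors' method.

```python
import numpy as np

def divergence_score(embeddings, alpha=0.5):
    """Toy divergence score for one instruction's sampled answers.

    embeddings: K x D array, one row per high-temperature answer.
    Combines dispersion magnitude (trace of the covariance) with
    shape anisotropy (top eigenvalue's share of total variance).
    NOTE: the weighting scheme and `alpha` are assumptions for
    illustration; ADG's actual score is defined in the paper.
    """
    X = np.asarray(embeddings, dtype=float)
    X = X - X.mean(axis=0)                  # center the answer cloud
    cov = X.T @ X / max(len(X) - 1, 1)      # sample covariance
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    dispersion = eigvals.sum()              # trace = total variance
    anisotropy = eigvals.max() / dispersion if dispersion > 0 else 0.0
    return alpha * dispersion + (1 - alpha) * anisotropy

# Answers spread along a direction score higher than near-identical ones.
tight = np.array([[0.00, 0.00], [0.01, 0.00], [0.00, 0.01]])
spread = np.array([[0.0, 0.0], [1.0, 0.0], [-1.0, 0.0]])
```

In a selection pipeline, one would score every instruction this way and keep the top-K (e.g. 10K) by divergence before fine-tuning.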