DiZiNER: Disagreement-guided Instruction Refinement via Pilot Annotation Simulation for Zero-shot Named Entity Recognition
arXiv cs.CL / 4/20/2026
📰 News · Models & Research
Key Points
- The paper proposes DiZiNER, a disagreement-guided instruction refinement framework that simulates pilot annotation to improve zero-shot named entity recognition (NER) with LLMs.
- DiZiNER uses multiple heterogeneous LLM “annotators” to label the same texts, then a supervisor model reviews disagreements to iteratively refine the task instructions.
- Evaluated on 18 NER benchmarks, DiZiNER sets a new zero-shot state-of-the-art on 14 datasets, improving on prior best results by +8.0 F1.
- The approach narrows the gap between zero-shot and supervised systems by more than 11 F1 points and remains competitive even against its own supervisor model (GPT-5 mini), suggesting the gains come from better instruction refinement rather than larger model capacity.
- Pairwise agreement among models is strongly correlated with NER performance, supporting the core premise that disagreement signals can drive effective instruction improvement.
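The refinement loop described above can be sketched in a few lines. The paper's actual prompts, models, and supervisor logic are not given here; `ann_a`, `ann_b`, `supervise`, the Jaccard-style agreement measure, and the convergence threshold are all illustrative assumptions, not the authors' implementation.

```python
from itertools import combinations

def pairwise_agreement(annotations):
    """Mean Jaccard overlap between every pair of annotators' entity sets
    (a simple stand-in for the paper's pairwise agreement signal)."""
    scores = []
    for a, b in combinations(annotations, 2):
        union = a | b
        scores.append(len(a & b) / len(union) if union else 1.0)
    return sum(scores) / len(scores)

def refine_instructions(instructions, texts, annotators, supervise,
                        rounds=3, threshold=0.9):
    """Disagreement-guided loop: simulate a pilot annotation with several
    annotators, and while they disagree, let a supervisor rewrite the
    task instructions based on the disputed spans."""
    for _ in range(rounds):
        # Each annotator labels every text under the current instructions.
        labels = [{(t, span) for t in texts for span in ann(t, instructions)}
                  for ann in annotators]
        if pairwise_agreement(labels) >= threshold:
            break  # annotators agree; instructions are considered stable
        # Spans produced by some annotators but not all are the disagreements.
        disagreements = set.union(*labels) - set.intersection(*labels)
        instructions = supervise(instructions, disagreements)
    return instructions

# --- Toy stand-ins for LLM annotators and the supervisor (hypothetical) ---
def ann_a(text, instr):
    return {("Paris", "LOC")}

def ann_b(text, instr):
    # Mislabels the museum until the instructions clarify the convention.
    if "museums" in instr:
        return {("Paris", "LOC")}
    return {("Paris", "LOC"), ("Louvre", "ORG")}

def supervise(instr, disagreements):
    # A real supervisor LLM would inspect the disputed spans; here we just
    # append a fixed clarification to resolve the toy disagreement.
    return instr + " Tag museums as LOC, not ORG."
```

Running `refine_instructions("Tag entities.", ["The Louvre is in Paris."], [ann_a, ann_b], supervise)` yields instructions extended with the museum clarification, after which the annotators agree and the loop stops early.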