How Sensitive Are Safety Benchmarks to Judge Configuration Choices?
arXiv cs.CL / 4/28/2026
Key Points
- The study argues that LLM judge configuration in safety benchmarks (judge model plus judge prompt) should not be treated as a fixed implementation detail because it materially affects results.
- Using a factorial experiment, the researchers generated 12 judge-prompt variants by crossing evaluation structure with instruction framing, then ran 28,812 judgments with Claude Sonnet 4-6 as the judge across six target models and 400 HarmBench behaviors.
- They found that changing only the prompt wording (while keeping the judge model fixed) can move measured harmful-response rates by as much as 24.2 percentage points, and even minor rewording can swing results by up to 20.1 percentage points.
- Safety rankings were moderately unstable across prompt variants (mean Kendall tau = 0.89), with sensitivity varying by harm category (e.g., large shifts for copyright but no change for harassment in their results).
- A supplementary experiment using multiple judge models indicates that judge-model selection introduces additional variance, highlighting prompt wording as a major and under-examined source of measurement error in safety benchmarking.
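To make the ranking-stability metric concrete: Kendall tau compares how consistently two judge configurations order the same set of target models. The sketch below is illustrative only — the model names are omitted and every harmful-response rate is invented, not taken from the paper — and uses the tie-free tau-a form, which may differ from the exact variant the authors computed.

```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall tau-a between the rankings induced by two score lists
    over the same items (assumes no tied scores)."""
    n = len(scores_a)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        # A pair is concordant if both score lists order items i and j
        # the same way, discordant if they order them oppositely.
        d = (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j])
        if d > 0:
            concordant += 1
        elif d < 0:
            discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical harmful-response rates (%) for six target models under
# two judge-prompt variants; all numbers are made up for illustration.
prompt_v1 = [12.0, 18.5, 9.2, 25.1, 14.0, 30.3]
prompt_v2 = [14.5, 17.0, 8.8, 27.9, 13.2, 29.1]

tau = kendall_tau(prompt_v1, prompt_v2)
print(f"Kendall tau between the two prompt-induced rankings: {tau:.2f}")
# → 0.87 (one of the 15 model pairs swaps order between the variants)
```

A tau of 1.0 would mean the two prompt variants rank all models identically; values below 1.0, as reported in the study, mean at least one model pair swaps places depending solely on how the judge prompt is worded.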