Deployment-Relevant Alignment Cannot Be Inferred from Model-Level Evaluation Alone

arXiv cs.AI / 5/7/2026


Key Points

  • The paper argues that deployment-relevant alignment cannot be reliably inferred from model-level evaluation scores alone, because benchmarks often measure something other than what deployment claims assume.
  • It proposes indexing alignment evidence by the collection level—model-level, response-level, interaction-level, or deployment-level—rather than using a single model score to justify alignment.
  • An audit of 11 alignment benchmarks (later expanded to 16), dual-coded against an eight-dimension rubric (Cohen’s kappa = 0.87; a sketch of the agreement computation follows this list), finds that user-facing verification support is missing from every benchmark examined and that process steerability is nearly absent.
  • A blinded cross-model stress test on 180 transcripts shows that the effectiveness of verification scaffolds is model-dependent, indicating that benchmark gaps identified at the model level cannot be fixed without broader system-level evaluation.
  • The authors recommend a system-level evaluation agenda using alignment profiles, fixed scaffolding protocols for comparable interactional tests, and reporting templates that clarify the gap between evaluation evidence and deployment claims.
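
The inter-coder agreement figure cited above is ordinary Cohen's kappa over the dual-coded rubric judgments. A minimal sketch of that computation follows; the coder labels below are hypothetical placeholders for a single rubric dimension across a sixteen-benchmark corpus, not the paper's actual codes.

```python
from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical dual-coded judgments (1 = dimension covered, 0 = not covered)
# for one rubric dimension across sixteen benchmarks.
coder_a = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
coder_b = [1, 0, 0, 1, 1, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(round(cohen_kappa(coder_a, coder_b), 2))  # ~0.73 on this toy data
```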

Abstract

Alignment evaluation in machine learning has largely become evaluation of models. Influential benchmarks score model outputs under fixed inputs on criteria such as truthfulness, instruction following, or pairwise preference, and these scores are often used to support claims about deployed alignment. This paper argues that deployment-relevant alignment cannot be inferred from model-level evaluation alone. Alignment claims should instead be indexed to the level at which evidence is collected: model-level, response-level, interaction-level, or deployment-level. Two studies support this position. First, a structured audit of eleven alignment benchmarks, extended to a sixteen-benchmark corpus, dual-coded against an eight-dimension rubric with Cohen's kappa = 0.87, finds that user-facing verification support is absent across every benchmark examined, while process steerability is nearly absent. The few interactional benchmarks identified, including tau-bench, CURATe, Rifts, and Common Ground, remain fragmented in coverage, and benchmark construction rather than data source determines what is measured. Second, a blinded cross-model stress test using 180 transcripts across three frontier models and four scaffolds finds that the same verification scaffold raises one model's verification support to ceiling while leaving another categorically unchanged. This shows that scaffold efficacy is model-dependent and that the gap identified by the audit cannot be closed at the model level alone. We propose a system-level evaluation agenda: alignment profiles instead of single scores, fixed-scaffolding protocols for comparable interactional evaluation, and reporting templates that make the inferential distance between evaluation evidence and deployment claims explicit.
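
For readers wondering what "alignment profiles instead of single scores" could look like in practice, here is a minimal sketch of one possible data structure, assuming profiles are indexed by the paper's four evidence-collection levels; the class name, fields, and example numbers are illustrative assumptions, not the authors' schema.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

# Evidence-collection levels named in the paper.
LEVELS = ("model", "response", "interaction", "deployment")

@dataclass
class AlignmentProfile:
    """Alignment evidence reported per collection level rather than as one score."""
    system: str
    # dimension -> {level -> score, or None where no evidence was collected}
    evidence: Dict[str, Dict[str, Optional[float]]] = field(default_factory=dict)

    def claim_supported_at(self, dimension: str, level: str) -> bool:
        """A claim at a given level is supported only if evidence exists at that level."""
        return self.evidence.get(dimension, {}).get(level) is not None

# Hypothetical profile: strong model-level scores, but no interaction- or
# deployment-level evidence for user-facing verification support.
profile = AlignmentProfile(
    system="assistant-v1",
    evidence={
        "truthfulness": {"model": 0.91, "response": 0.88, "interaction": None, "deployment": None},
        "user_facing_verification": {lvl: None for lvl in LEVELS},
    },
)
print(profile.claim_supported_at("truthfulness", "deployment"))  # False
```

A reporting template in this spirit would make the inferential distance explicit: each deployment claim is traced to the levels at which evidence was actually collected, and missing cells stay visible rather than being averaged into a single score.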