Evaluating Human-AI Safety: A Framework for Measuring Harmful Capability Uplift

arXiv cs.AI · March 31, 2026


Key Points

  • The paper argues that frontier AI safety evaluations should shift from static benchmarks and red-teaming toward human-centered measurements of risk.
  • It proposes “harmful capability uplift” as a core safety metric, defined as the marginal increase in a user’s ability to cause harm when using a frontier model, beyond what conventional tools already enable.
  • The framework is grounded in prior social science research and includes methodological guidance for systematically measuring this uplift.
  • The authors outline actionable next steps for developers, researchers, funders, and regulators to make harmful capability uplift evaluation a standard practice.

Abstract

Current frontier AI safety evaluations emphasize static benchmarks, third-party annotations, and red-teaming. In this position paper, we argue that AI safety research should focus on human-centered evaluations that measure harmful capability uplift: the marginal increase in a user's ability to cause harm with a frontier model beyond what conventional tools already enable. We frame harmful capability uplift as a core AI safety metric, ground it in prior social science research, and provide concrete methodological guidance for systematic measurement. We conclude with actionable steps for developers, researchers, funders, and regulators to make harmful capability uplift evaluation a standard practice.