ASMR-Bench: Auditing for Sabotage in ML Research

arXiv cs.AI / April 20, 2026


Key Points

  • The paper introduces ASMR-Bench, a new benchmark designed to test how well auditors can detect sabotage embedded in ML research codebases.
  • ASMR-Bench includes nine ML research codebases with sabotaged variants that change implementation details (e.g., hyperparameters, training data, evaluation code) while keeping the high-level methods the same.
  • Experiments show that both frontier LLMs and LLM-assisted human auditors struggle to reliably detect sabotage, with the best results reaching an AUROC of 0.77 and a top-1 fix rate of 42%.
  • When used as red teamers, LLMs generate sabotages that are generally weaker than those produced by humans, but they can still evade auditors with similar capabilities.
  • The authors release ASMR-Bench to advance research on monitoring and auditing techniques for autonomous, AI-conducted scientific work.

Abstract

As AI systems are increasingly used to conduct research autonomously, misaligned systems could introduce subtle flaws that produce misleading results while evading detection. We introduce ASMR-Bench (Auditing for Sabotage in ML Research), a benchmark for evaluating the ability of auditors to detect sabotage in ML research codebases. ASMR-Bench consists of 9 ML research codebases with sabotaged variants that produce qualitatively different experimental results. Each sabotage modifies implementation details, such as hyperparameters, training data, or evaluation code, while preserving the high-level methodology described in the paper. We evaluate frontier LLMs and LLM-assisted human auditors on ASMR-Bench and find that both struggle to reliably detect sabotage: the best performance is an AUROC of 0.77 and a top-1 fix rate of 42%, achieved by Gemini 3.1 Pro. We also test LLMs as red teamers and find that LLM-generated sabotages are weaker than human-generated ones but still sometimes evade same-capability LLM auditors. We release ASMR-Bench to support research on monitoring and auditing techniques for AI-conducted research.
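To make the headline AUROC metric concrete: a common setup is to have the auditor assign each codebase a suspicion score, then measure how well those scores separate sabotaged variants from clean ones. Below is a minimal, self-contained sketch of that computation using the rank-based (Mann-Whitney) formulation of AUROC. The scores are invented for illustration; the paper does not publish per-item scores, and this is not the authors' evaluation code.

```python
def auroc(sabotaged_scores, clean_scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen sabotaged codebase receives a higher suspicion score
    than a randomly chosen clean one (ties count as half)."""
    wins = 0.0
    for s in sabotaged_scores:
        for c in clean_scores:
            if s > c:
                wins += 1.0
            elif s == c:
                wins += 0.5
    return wins / (len(sabotaged_scores) * len(clean_scores))

# Hypothetical auditor suspicion scores in [0, 1] (made up for this sketch):
sabotaged = [0.9, 0.6, 0.8, 0.4]  # variants with planted flaws
clean = [0.3, 0.5, 0.7, 0.2]      # unmodified codebases

print(auroc(sabotaged, clean))  # prints 0.8125
```

An AUROC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect separation, so the paper's best result of 0.77 indicates auditors do better than chance but still leave substantial room for sabotage to slip through.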