Risk Reporting for Developers' Internal AI Model Use

arXiv cs.AI / April 30, 2026

Key Points

  • Frontier AI firms often test their most capable models internally for weeks or months before any public release, which creates safety and governance risks that external deployment rules may not fully cover.
  • Multiple legal and regulatory efforts (California SB 53, New York’s RAISE Act, and the EU General-Purpose AI Code of Practice) explicitly require developers to plan for, and report on, risks arising from internal AI model use.
  • The guide proposes a harmonized standard to help companies produce internal-use risk reports that satisfy these overlapping regulatory requirements.
  • The framework focuses on two main threat vectors—autonomous model misbehavior and insider threats—and evaluates them via three risk factors: means, motive, and opportunity.
  • Regular, detailed internal risk reporting is positioned as a practical mechanism to identify and manage risks despite limited external visibility into internal model deployment and testing.

Abstract

Frontier AI companies first deploy their most advanced models internally for weeks or months of safety testing, evaluation, and iteration before a possible public release. For example, Anthropic recently developed a new class of model with advanced cyberoffense-relevant capabilities, Mythos Preview, which was available internally for at least six weeks before it was publicly announced. This internal use creates risks that external deployment frameworks may fail to address. Legal frameworks, notably California's Transparency in Frontier Artificial Intelligence Act (SB 53), New York's Responsible AI Safety and Education (RAISE) Act, and the EU's General-Purpose AI Code of Practice, all address risks from internal AI use. They require frontier developers to make and implement plans for managing risks from internal use, and to produce internal-use risk reports describing their safeguards and any residual risks. This guide provides a harmonized standard for producing internal-use risk reports suitable for all three regulatory frameworks. It is addressed primarily to evaluation and safety teams at frontier AI developers, and secondarily to regulators and auditors seeking to understand what good reporting looks like. Given the pace of AI R&D automation and the limited external visibility into how companies use their most capable models internally, regular and detailed risk reporting may be one of the few mechanisms available to ensure that risks from internal AI use are identified and managed before they materialize. Whenever a substantially more capable or riskier model is deployed internally, the developer should produce a risk report and argue why the model is safe to deploy. We structure the reporting framework around two threat vectors, autonomous AI misbehavior and insider threats, and three risk factors for each: means, motive, and opportunity.
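
To make the report structure concrete, below is a minimal sketch in Python of how a report following the two-vector, three-factor framework might be organized. This is an illustration under stated assumptions, not the paper's own schema: all class and field names (InternalUseRiskReport, FactorAssessment, residual_risk, and so on) are hypothetical.

```python
from dataclasses import dataclass, field
from enum import Enum


class ThreatVector(Enum):
    """The two threat vectors the framework evaluates."""
    AUTONOMOUS_MISBEHAVIOR = "autonomous AI misbehavior"
    INSIDER_THREAT = "insider threat"


class RiskFactor(Enum):
    """The three risk factors assessed for each threat vector."""
    MEANS = "means"
    MOTIVE = "motive"
    OPPORTUNITY = "opportunity"


@dataclass
class FactorAssessment:
    """One risk-factor assessment: the evidence considered and a
    summary judgment of residual risk after safeguards."""
    factor: RiskFactor
    evidence: list[str]   # e.g. capability evals, access logs, red-team findings
    residual_risk: str    # free-text judgment after safeguards are applied


@dataclass
class InternalUseRiskReport:
    """A per-model report, produced whenever a substantially more
    capable or riskier model is deployed internally."""
    model_name: str
    deployment_date: str
    safeguards: list[str]
    assessments: dict[ThreatVector, list[FactorAssessment]] = field(
        default_factory=dict
    )

    def is_complete(self) -> bool:
        """The report is complete only when every threat vector has an
        assessment for all three risk factors."""
        return all(
            {a.factor for a in self.assessments.get(vector, [])} == set(RiskFactor)
            for vector in ThreatVector
        )
```

Encoding the vectors and factors as enums makes the completeness check mechanical: a report cannot pass is_complete() until every cell of the 2x3 vector-by-factor matrix is filled in, which mirrors how a regulator or auditor might review a submission against the framework.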