Governing frontier general-purpose AI in the public sector: adaptive risk management and policy capacity under uncertainty through 2030
arXiv cs.AI / 4/10/2026
Key Points
- The article frames frontier general-purpose AI governance in the public sector as an institutional design challenge under uncertainty, where capabilities advance unevenly and knowledge of harms and effective interventions remains incomplete.
- It critiques static “compliance-only” approaches and argues for adaptive risk management using scenario-aware regulation that remains robust across multiple plausible technology trajectories through 2030.
- The proposed framework combines capability monitoring, differentiated AI risk tiering, conditional controls, institutional learning, and standards-based interoperability to handle prediction limits and evolving evidence.
- It emphasizes that successful government adoption depends on sociotechnical factors, including organizational redesign, data collaboration capacity, and accountability structures aligned with public values.
- The paper calls for stronger policy capacity and clearer responsibility allocation so governance mechanisms can persist and improve as the evidence base and AI capabilities change.