AI Governance under Political Turnover: The Alignment Surface of Compliance Design

arXiv cs.AI / 4/25/2026


Key Points

  • The paper argues that probabilistic AI used in public administration needs a dedicated compliance layer to ensure decisions are reviewable, repeatable, and legally defensible.
  • It proposes a formal model showing how design choices—the scale of automation, the degree of codification, and safeguards on iterative use—affect governance reliability under political change.
  • The research highlights a trade-off: compliance layers can improve oversight by detecting departures from law, but they may also create an “approval boundary” that future political successors learn to strategically navigate.
  • The model explains why oversight reforms can initially reduce risk yet later increase vulnerability to internal strategic manipulation, and why expanding AI use can be hard to reverse.
  • Overall, the paper concludes that making AI operational in government procedures can inadvertently enable future governments to learn and exploit those procedures.

Abstract

Governments are increasingly interested in using AI to make administrative decisions cheaper, more scalable, and more consistent. But for probabilistic AI to be incorporated into public administration it must be embedded in a compliance layer that makes decisions reviewable, repeatable, and legally defensible. That layer can improve oversight by making departures from law easier to detect. But it can also create a stable approval boundary that political successors learn to navigate while preserving the appearance of lawful administration. We develop a formal model in which institutions choose the scale of automation, the degree of codification, and safeguards on iterative use. The model shows when these systems become vulnerable to strategic use from within government, why reforms that initially improve oversight can later increase that vulnerability, and why expansions in AI use may be difficult to unwind. Making AI usable can thus make procedures easier for future governments to learn and exploit.
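The abstract's central trade-off—codification improves detection of unlawful deviations but fixes an "approval boundary" that successors can learn—can be illustrated with a toy model. This is purely a hypothetical sketch, not the paper's actual formal model: the function `governance_risk`, its parameters (`codification`, `tenure`, `learn_rate`), and the weights are invented here for illustration.

```python
# Hypothetical toy model (NOT the paper's formal model) of the trade-off
# described in the abstract: codifying procedures shrinks the share of
# deviations that go undetected, but a fixed approval boundary becomes
# easier for political successors to learn and exploit over time.

def governance_risk(codification: float, tenure: int,
                    learn_rate: float = 0.15) -> float:
    """Stylized net governance risk (invented functional form).

    codification: fraction of the procedure made explicit, in [0, 1].
    tenure: periods a successor has had to probe the approval boundary.
    learn_rate: per-period chance of learning more of the boundary.
    """
    # Undetected deviations shrink as more of the procedure is codified.
    detection_gap = 1.0 - codification
    # Successors gradually learn the fixed boundary (geometric learning).
    boundary_knowledge = 1.0 - (1.0 - learn_rate) ** tenure
    # Only codified rules can be gamed precisely while appearing lawful.
    exploitation = codification * boundary_knowledge
    # Weights are arbitrary; exploitation is weighted slightly higher so
    # the reversal the paper describes can appear in this toy setting.
    return 0.4 * detection_gap + 0.6 * exploitation

# Early on, heavier codification lowers risk; once successors have
# learned the boundary, the same codification raises it.
print(governance_risk(0.2, tenure=1), governance_risk(0.8, tenure=1))
print(governance_risk(0.2, tenure=20), governance_risk(0.8, tenure=20))
```

Under these invented parameters, the low-codification regime is riskier early (oversight is weak) but less risky late (there is less fixed structure to exploit), mirroring the paper's claim that reforms improving oversight can later increase vulnerability to strategic use from within.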