Bureaucratic Silences: What the Canadian AI Register Reveals, Omits, and Obscures

arXiv cs.AI / April 20, 2026


Key Points

  • The Government of Canada’s first Federal AI Register was released in November 2025 to advance transparency, but the paper argues it is not a neutral snapshot of government AI activity.
  • Using the ADMAPS framework, the authors analyzed all 409 systems in the Register and found most are used internally for efficiency (86%), highlighting a gap between stated goals and operational reality.
  • The Register’s framing prioritizes technical descriptions while downplaying the human discretion, training, and uncertainty management needed to run these systems.
  • The paper concludes that, without redesign, transparency efforts can turn accountability into a performative compliance exercise—providing visibility without meaningful contestability.
  • The central implication is that “ontological design” determines what counts as AI and how accountability boundaries are drawn, affecting whether oversight is substantive.

Abstract

In November 2025, the Government of Canada operationalized its commitment to transparency by releasing its first Federal AI Register. In this paper, we argue that such registers are not neutral mirrors of government activity, but active instruments of ontological design that configure the boundaries of accountability. We analyzed the Register's complete dataset of 409 systems using the Algorithmic Decision-Making Adapted for the Public Sector (ADMAPS) framework, combining quantitative mapping with deductive qualitative coding. Our findings reveal a sharp divergence between the rhetoric of "sovereign AI" and the reality of bureaucratic practice: while 86% of systems are deployed internally for efficiency, the Register systematically obscures the human discretion, training, and uncertainty management required to operate them. By privileging technical descriptions over sociotechnical context, the Register constructs an ontology of AI as "reliable tooling" rather than "contestable decision-making." We conclude that without a shift in design, such transparency artifacts risk automating accountability into a performative compliance exercise, offering visibility without contestability.