Participatory provenance as representational auditing for AI-mediated public consultation

arXiv cs.AI / April 23, 2026


Key Points

  • The paper argues that existing AI auditing methods (explainability, grounding, hallucination detection) don’t solve a key accountability gap: whether AI summaries faithfully preserve the *input population* in public consultations.
  • It introduces “participatory provenance,” a measurement framework using optimal transport theory, causal inference, and semantic analysis to track how individuals’ submissions are transformed, filtered, or dropped during AI-mediated summarization.
  • Applied to Canada’s 2025–2026 national AI Strategy consultation (5,253 respondents across two topics), the study finds government-produced AI summaries underperform a random-participant baseline, with coverage degradation of 9.1% and 8.0% and effective exclusion rates of 16.9% and 15.3%.
  • Exclusion is concentrated among clustered views expressing dissent, skepticism, and critique of AI, and factors like brevity, semantic isolation, and rhetorical register independently predict poorer representational fidelity.
  • The authors release an open-source interactive tool, the Co-creation Provenance Lab, to help policymakers audit and iteratively improve summaries for scalable human-in-the-loop oversight.
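The "effective exclusion rate" described above can be illustrated with a toy sketch. The paper's exact formulation is not reproduced here; the embeddings, threshold value, and function names below are illustrative assumptions. The idea: match each participant submission to its closest summary sentence in embedding space, and count participants whose best match falls below a similarity threshold as effectively excluded.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def exclusion_rate(submissions, summary, threshold=0.5):
    """Fraction of submissions whose best-matching summary
    sentence falls below the similarity threshold.
    (Illustrative metric, not the paper's exact definition.)"""
    excluded = 0
    for s in submissions:
        best = max(cosine(s, t) for t in summary)
        if best < threshold:
            excluded += 1
    return excluded / len(submissions)

# Toy 2-D embeddings: three participants, two summary sentences.
# The third participant (a dissenting view, say) is far from both
# summary sentences and so counts as excluded.
subs = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
summ = [[1.0, 0.1], [0.8, 0.2]]
rate = exclusion_rate(subs, summ, threshold=0.5)  # 1 of 3 excluded
```

In practice the embeddings would come from a sentence-encoder model rather than hand-written vectors, but the aggregation logic is the same: exclusion is a property of the best match, so a summary can cover majority clusters well while still dropping isolated or dissenting submissions.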

Abstract

Artificial intelligence is increasingly deployed to synthesize large-scale public input in policy consultations and participatory processes. Yet no formal framework exists for auditing whether these summaries faithfully represent the source population, an accountability gap that existing approaches to AI explainability, grounding and hallucination detection do not address because they focus on output quality rather than input fidelity. Here, participatory provenance is introduced: a measurement framework grounded in optimal transport theory, causal inference and semantic analysis that tracks how individual public submissions are transformed, filtered or lost through AI-mediated summarization. Applied to Canada's 2025–2026 national AI Strategy consultation (n = 5,253 respondents across two independent policy topics), the framework reveals that both official government summaries underperform a random-participant baseline (−9.1% and −8.0% coverage degradation), with 16.9% and 15.3% of participants effectively excluded. Exclusion concentrates in clusters expressing dissent, skepticism and critique of AI (33–88% exclusion rates). Brevity, semantic isolation and rhetorical register independently predict representational outcome. An accompanying open-source interactive tool, the Co-creation Provenance Lab, enables policymakers to audit and iteratively improve summaries, establishing genuine human-in-the-loop oversight at scale.
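The optimal-transport grounding mentioned in the abstract can be sketched in one dimension, where the optimal transport plan simply matches sorted samples. This is a minimal illustration only; the paper's actual transport formulation over high-dimensional submission embeddings is not reproduced here.

```python
def wasserstein_1d(xs, ys):
    """W1 (earth mover's) distance between two equal-size
    1-D empirical samples. In one dimension the optimal
    transport plan pairs the sorted values, so the distance
    is the mean absolute difference of sorted samples."""
    assert len(xs) == len(ys), "toy version assumes equal sample sizes"
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Shifting every point by 1 costs exactly 1 unit of transport.
d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])  # 1.0
```

A coverage-degradation audit in this spirit would compare the transport cost from the full submission distribution to the official summary against the cost to a random-participant baseline: if random selection transports mass more cheaply than the AI summary does, the summary has distorted the input population.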