From experimentation to engagement: on the paradox of participatory AI and power in contexts of forced displacement and humanitarian crises

arXiv cs.AI / 4/10/2026


Key Points

  • The paper examines the paradox of “participatory AI”: efforts to involve affected communities are increasingly promoted as a path to ethical AI use, yet remain far less studied in humanitarian and forced-displacement contexts.
  • Drawing on a pilot exercise in Kakuma Refugee Camp (Kenya), the authors identify limitations of certain participatory AI approaches that could heighten risks of “participation washing” and other forms of algorithmic harm.
  • The findings suggest the core problem is not primarily community misunderstanding of AI, but the structural power dynamics between aid recipients, service providers, donor governments, host nations, and AI companies.
  • The authors call for both more rigorous participatory methods and independent governance architecture to ensure humanitarian AI can be held accountable.
  • Overall, the work reframes participatory AI as a governance and incentives challenge shaped by institutional relationships rather than a purely educational or consent-driven issue.

Abstract

Across the Global North, calls for participatory artificial intelligence (AI) to improve the responsible, safe, and ethical use of AI have increased, particularly efforts that engage citizens and communities whose well-being and safety may be directly impacted by AI and other algorithmic tools. These initiatives include surveys, community consultations, citizens' councils and assemblies, and co-designing AI models and projects. Far fewer efforts, however, have been made in the Global South, particularly in contexts related to humanitarian crises and forced displacement, where the deployment of AI and algorithmic tools is accelerating. In this paper, we critically examine participatory AI methods and their limitations in these contexts and explore the opinions and perceptions of AI held by displaced and crisis-affected communities. Based on a pilot exercise with communities living in Kakuma Refugee Camp in northwestern Kenya, we find important limitations in some participatory AI approaches which, if used in humanitarian contexts, could increase risks of so-called 'participation washing' and algorithmic harm. We argue that these risks are not predominantly driven by varying levels of understanding and awareness of AI but more closely linked to the fundamental power dynamics embedded within the humanitarian sector: between humanitarian aid recipients, service providers, donor governments, and host nations, as well as the power differentials and incentives that exist between AI companies and humanitarian actors. These structural conditions make the case not only for more rigorous participatory methods, but for independent governance architecture capable of holding humanitarian AI to account.