From experimentation to engagement: on the paradox of participatory AI and power in contexts of forced displacement and humanitarian crises
arXiv cs.AI / 4/10/2026
💬 Opinion · Ideas & Deep Analysis · Models & Research
Key Points
- The paper examines the paradox of “participatory AI”: involving affected communities is increasingly promoted as an ethical-AI practice, yet it remains far less studied in humanitarian settings and forced-displacement contexts.
- Using a pilot in Kakuma Refugee Camp (Kenya), the authors report limitations of certain participatory AI approaches that could elevate risks of “participation washing” and other forms of algorithmic harm.
- The findings suggest the core problem is not primarily community misunderstanding of AI, but structural power dynamics among aid recipients, service providers, donor governments, host nations, and AI companies.
- The authors call for both more rigorous participatory methods and independent governance architecture to ensure humanitarian AI can be held accountable.
- Overall, the work reframes participatory AI as a governance and incentives challenge shaped by institutional relationships, rather than a purely educational or consent-based problem.