AI Navigate

"I followed what felt right, not what I was told": Autonomy, Coaching, and Recognizing Bias Through AI-Mediated Dialogue

arXiv cs.AI / March 13, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The study tests AI-mediated dialogue interventions to help participants recognize ableist microaggressions across four conditions: Bias-Directed nudges, Neutral-Directed inclusion, Self-Directed unguided dialogue, and a text-only Reading control.
  • Quantitative results show that dialogue-based conditions improved recognition of bias relative to Reading, though the trajectories differed: biased nudges enhanced differentiation but increased overall negativity.
  • Qualitative analysis indicates biased nudges were often rejected while inclusive nudges were adopted as scaffolding, highlighting design trade-offs for conversational systems.
  • The authors contribute a validated vignette corpus, an AI-mediated intervention platform, and practical design implications for integrating bias-related nudges in dialogue systems.

Abstract

Ableist microaggressions remain pervasive in everyday interactions, yet interventions to help people recognize them are limited. We present an experiment testing how AI-mediated dialogue influences recognition of ableism. A total of 160 participants completed a pre-test, an intervention, and a post-test across four conditions: AI nudges toward bias (Bias-Directed), AI nudges toward inclusion (Neutral-Directed), unguided dialogue (Self-Directed), and a text-only reading control (Reading). Participants rated scenarios on standardness of social experience and emotional impact; those in dialogue-based conditions also provided qualitative reflections. Quantitative results showed that dialogue-based conditions produced stronger recognition than Reading, though trajectories diverged: biased nudges improved differentiation of bias from neutrality but increased overall negativity. Conditions with inclusive or no nudges remained more balanced, while Reading participants showed weaker gains and even declines. Qualitative findings revealed that biased nudges were often rejected, while inclusive nudges were adopted as scaffolding. We contribute a validated vignette corpus, an AI-mediated intervention platform, and design implications highlighting the trade-offs conversational systems face when integrating bias-related nudges.