AI Washing Inflates Expected Performance but Not Interaction Outcomes: An AI Placebo Study Using Fitts' Law

arXiv cs.AI / 5/4/2026


Key Points

  • The study tests whether “AI washing” (marketing a device as AI-enabled despite limited real AI functionality) can create placebo-like effects on user expectations during human-computer interaction.
  • In an experiment with 28 participants using Fitts’ Law mouse tasks, people in placebo conditions (supposed predictive or biosignal-enhanced AI support) reported significantly higher expected performance than in the no-support baseline.
  • Despite inflated expectations, the placebo framing produced no measurable differences in objective Fitts’ Law performance metrics or in subjective evaluations such as perceived workload and perceived usability.
  • The findings suggest that deceptive AI claims can manipulate what users think will happen without changing the interaction outcomes, raising transparency and accountability concerns.
  • The paper proposes Fitts’ Law as a rigorous auditing method to evaluate “AI-labeled” input devices rather than relying on marketing claims.
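For readers unfamiliar with the metric behind the last point: Fitts' Law models pointing time as a linear function of an index of difficulty, MT = a + b · ID, with ID = log2(D/W + 1) in the widely used Shannon formulation (D = target distance, W = target width). The sketch below shows how an audit could quantify a device's pointing performance; the constants `a` and `b` are illustrative placeholders (in practice they are fit per device by linear regression), not values from the paper.

```python
import math

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def predicted_movement_time(distance, width, a=0.1, b=0.15):
    """Fitts' Law: MT = a + b * ID (seconds).

    a and b are illustrative constants; a real audit would estimate
    them from measured movement times via linear regression.
    """
    return a + b * index_of_difficulty(distance, width)

def throughput(distance, width, observed_mt):
    """Throughput in bits/s: a device-level performance measure that
    can be compared across conditions (e.g., baseline vs. a device
    merely *framed* as AI-assisted)."""
    return index_of_difficulty(distance, width) / observed_mt
```

For example, a target at distance 7 with width 1 has ID = log2(8) = 3 bits; if a participant takes 1.5 s on average, throughput is 2 bits/s. Comparing such throughput values between a baseline and an "AI-labeled" condition is the kind of objective check the paper advocates over trusting marketing claims.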

Abstract

Expectations about the support of artificial intelligence (AI) may influence interaction outcomes similar to placebos. Such expectations may result from AI washing, a practice of overstating a system's AI capabilities when actual functionality is limited. For example, some computer mice are marketed as "AI-assisted" despite lacking AI in core functions. In a within-subjects study, 28 participants completed Fitts' Law tasks with a computer mouse under three conditions: no support, supposed predictive AI support, and supposed biosignal-enhanced AI support. Objective Fitts' Law performance indicators and subjective performance expectations, perceived workload, and perceived usability were measured. Compared to baseline, participants expected significantly improved performance in placebo conditions. However, these expectations did not translate into differences in objective or subjective assessments. This paper contributes evidence that AI washing inflates user expectations without altering actual interaction outcomes, highlighting a critical transparency issue. By exposing how deceptive AI marketing can shape user expectations, we underscore the need for accountability in AI product claims. Further, we establish Fitts' Law as a rigorous methodological lens for auditing AI-labelled input devices.
