Strategic Polysemy in AI Discourse: A Philosophical Analysis of Language, Hype, and Power

arXiv cs.LG / 4/24/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that key AI terms such as “hallucination,” “chain-of-thought,” “alignment,” and “agent” often function as strategically polysemous terms, carrying narrow technical meanings and broader everyday or anthropomorphic connotations at the same time.
  • It introduces “glosslighting” as a mechanism where actors use technically redefined terms to trigger intuitive (sometimes misleading) associations while retaining plausible deniability via narrow definitions.
  • The authors claim this semantic flexibility has institutional and discursive consequences, affecting how AI systems are interpreted by researchers, policymakers, funders, and the public.
  • The paper links glosslighting to AI hype cycles, arguing that it helps mobilize investment and institutional support while deflecting epistemic and ethical scrutiny.
  • Overall, it frames language as a sociotechnical tool that shapes both AI development and AI governance through how meanings are managed in public discourse.

Abstract

This paper examines the strategic use of language in contemporary artificial intelligence (AI) discourse, focusing on the widespread adoption of metaphorical or colloquial terms like "hallucination", "chain-of-thought", "introspection", "language model", "alignment", and "agent". We argue that many such terms exhibit strategic polysemy: they sustain multiple interpretations simultaneously, combining narrow technical definitions with broader anthropomorphic or common-sense associations. In contemporary AI research and deployment contexts, this semantic flexibility produces significant institutional and discursive effects, shaping how AI systems are understood by researchers, policymakers, funders, and the public. To analyse this phenomenon, we introduce the concept of glosslighting: the practice of using technically redefined terms to evoke intuitive -- often anthropomorphic or misleading -- associations while preserving plausible deniability through restricted technical definitions. Glosslighting enables actors to benefit from the persuasive force of familiar language while maintaining the ability to retreat to narrower definitions when challenged. We argue that this practice contributes to AI hype cycles, facilitates the mobilisation of investment and institutional support, and influences public and policy perceptions of AI systems, while often deflecting epistemic and ethical scrutiny. By examining the linguistic dynamics of glosslighting and strategic polysemy, the paper highlights how language itself functions as a sociotechnical mechanism shaping the development and governance of AI.