The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows

arXiv cs.AI / 4/17/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper argues that widespread LLM use is changing how people assess their own abilities, not just how much they rely on model outputs.
  • It introduces the “LLM fallacy,” a cognitive attribution error where users mistakenly treat LLM-assisted results as proof of their own independent competence.
  • The authors explain that LLM opacity, natural fluency, and low-friction interaction blur the boundary between human and machine contributions, encouraging inference from outcomes rather than from underlying processes.
  • The work connects the effect to related concepts like automation bias and cognitive offloading, while positioning it as a distinct attributional distortion specific to AI-mediated workflows.
  • It proposes a framework and typology across computational, linguistic, analytical, and creative tasks, and discusses implications for education, hiring, and AI literacy, along with plans for empirical validation.

Abstract

The rapid integration of large language models (LLMs) into everyday workflows has transformed how individuals perform cognitive tasks such as writing, programming, analysis, and multilingual communication. While prior research has focused on model reliability, hallucination, and user trust calibration, less attention has been given to how LLM usage reshapes users' perceptions of their own capabilities. This paper introduces the LLM fallacy, a cognitive attribution error in which individuals misinterpret LLM-assisted outputs as evidence of their own independent competence, producing a systematic divergence between perceived and actual capability. We argue that the opacity, fluency, and low-friction interaction patterns of LLMs obscure the boundary between human and machine contribution, leading users to infer competence from outputs rather than from the processes that generate them. We situate the LLM fallacy within existing literature on automation bias, cognitive offloading, and human–AI collaboration, while distinguishing it as a form of attributional distortion specific to AI-mediated workflows. We propose a conceptual framework of its underlying mechanisms and a typology of manifestations across computational, linguistic, analytical, and creative domains. Finally, we examine implications for education, hiring, and AI literacy, and outline directions for empirical validation. We also provide a transparent account of human–AI collaborative methodology. This work establishes a foundation for understanding how generative AI systems not only augment cognitive performance but also reshape self-perception and perceived expertise.