Ipromptism: Is AI Creating a New Form of Illiteracy?
Over the past two years, something fundamental has changed in the way we produce knowledge.
We no longer write first.
We prompt.
Large language models like ChatGPT, Claude, or Gemini have transformed the act of writing into something different: instead of directly producing text, we increasingly instruct machines to generate it for us.
The workflow has shifted.
Instead of thinking → structuring → writing, we now often:
- formulate a prompt
- generate text
- refine the output
This shift is incredibly powerful. It allows individuals to produce articles, reports, and code faster than ever before.
But it also raises an uncomfortable question:
What happens to our cognitive skills when machines begin to structure language and thought on our behalf?
This question led me to a concept I call Ipromptism, which I first introduced in a blog article:
https://blog.la-mine.io/article/ipromptisme-nouvel-illettrisme-ia-generative
The idea is developed in more detail in my research paper:
Ipromptism: Towards a New Illiteracy in the Age of Generative Artificial Intelligence?
Ipromptism describes a cultural and cognitive shift where humans increasingly rely on prompts to externalize the production of language and structured thinking to AI systems.
The goal is not to criticize AI tools. They are extraordinary amplifiers of human capability.
However, like every major technological shift — from calculators to GPS — they may also transform the way our cognitive abilities develop and are maintained.
If writing is a form of thinking, what happens when we stop writing ourselves?
Definition
Ipromptism describes the cognitive shift in which prompting becomes the primary interface between human thought and knowledge production systems.
Traditional cognition: Think → Structure → Write
Ipromptism: Prompt → Generate → Edit
What is Ipromptism?
Ipromptism refers to a new interaction paradigm between humans and knowledge production systems.
Traditionally, writing has been a direct cognitive activity. When we write, we are forced to:
- organize ideas
- structure arguments
- choose precise words
- refine meaning
In other words, writing is not merely a communication tool — it is also a thinking process.
Large language models change this dynamic.
Instead of producing language directly, users now guide the production of language through prompts. The human provides instructions, while the AI performs the linguistic generation.
In its simplest form, the workflow becomes:
Prompt → Generate → Edit
Rather than:
Think → Structure → Write
This transformation creates a new form of interaction with knowledge.
The user becomes less of a writer and more of a director of generation.
Prompting therefore introduces a new type of literacy: the ability to instruct AI systems effectively. Writing the right prompt requires understanding context, goals, and constraints.
However, this shift also raises a deeper question.
If language generation becomes externalized to machines, are we witnessing the emergence of a new form of illiteracy — not the inability to read or write, but the gradual loss of practice in producing structured thought independently?
Ipromptism does not claim that this outcome is inevitable. Instead, it highlights a transition that may redefine what it means to be literate in the age of AI.
The Shift: From Writing to Prompting
One of the most profound transformations introduced by large language models is not simply the automation of writing, but the redefinition of the writing process itself.
For centuries, producing text followed a relatively stable cognitive sequence:
Think → Structure → Write
Writing required individuals to actively organize ideas, construct arguments, and refine language. The act of writing itself was inseparable from the act of thinking.
With the emergence of generative AI systems, this sequence is increasingly changing.
Today, the workflow often looks more like this:
Prompt → Generate → Edit
Instead of producing language directly, the user formulates instructions that guide the AI system. The machine then generates structured text, which the user reviews and modifies.
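The loop described above can be sketched in a few lines of Python. The `generate` function here is a hypothetical stand-in for a call to any LLM API (real client libraries differ in their details); only the shape of the workflow matters.

```python
# A minimal sketch of the Prompt -> Generate -> Edit workflow.
# `generate` is a hypothetical placeholder for a large language
# model API call; real client libraries differ in detail.

def generate(prompt: str) -> str:
    """Placeholder: pretend the model returns a draft for the prompt."""
    return f"[model draft for: {prompt}]"

def prompt_generate_edit(prompt: str, edit) -> str:
    """The new workflow: the human instructs, the machine writes,
    and the human reviews and refines."""
    draft = generate(prompt)   # the machine produces the language
    return edit(draft)         # the human curates the result

final_text = prompt_generate_edit(
    "Summarize the benefits of hybrid intelligence in two sentences.",
    edit=str.strip,            # the human editing pass, stubbed out here
)
print(final_text)
```

Note where the human effort sits in this sketch: entirely in the instruction and the editing pass, with the linguistic generation delegated to the machine.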
This shift fundamentally changes the role of the human participant.
Rather than acting as the direct producer of language, the user becomes a curator, editor, or director of generated content.
In many contexts, this transformation is extremely beneficial. Generative systems allow individuals to draft reports, brainstorm ideas, summarize research, or write code in a fraction of the time previously required.
Productivity gains can be significant.
However, this shift also raises a critical question about the long-term relationship between cognition and language production.
If structured language is increasingly generated externally, what happens to the cognitive processes traditionally exercised through writing?
Cognitive Outsourcing
The growing reliance on AI systems for language generation can be understood as a form of cognitive outsourcing.
Throughout history, humans have regularly delegated certain cognitive tasks to tools.
Calculators reduced the need for manual arithmetic.
GPS systems reduced our reliance on spatial navigation skills.
Spell-checkers reduced the cognitive effort required for orthographic accuracy.
Research on digital cognition suggests that externalizing information in this way can change how memory works.
In a well-known study, Sparrow, Liu, and Wegner (2011) found that when people know information is stored externally (for example, on the internet), they are less likely to remember the information itself and more likely to remember where to find it.
This phenomenon has been described as the "Google effect on memory".
(Sparrow, Liu & Wegner, 2011)
https://www.science.org/doi/10.1126/science.1207745
Neuroscience research points in the same direction. Studies of navigation strategies show that different cognitive systems are engaged depending on how people navigate.
For example, spatial learning tends to activate the hippocampus, a brain region associated with memory formation, while GPS-like navigation strategies rely more on the caudate nucleus.
(Bohbot et al., 2007)
https://doi.org/10.1016/j.neuroimage.2007.01.044
In each case, tools expanded human capability while simultaneously altering the way certain skills were practiced and maintained.
Large language models represent a similar, but potentially deeper, transformation.
Unlike calculators or navigation tools, generative AI does not merely assist with a specific task. It participates directly in the production of language, reasoning, and structured knowledge.
When users rely on AI systems to draft arguments, explain concepts, or structure ideas, part of the cognitive workload associated with these tasks is effectively transferred to the machine.
This does not necessarily mean that human thinking disappears. In many cases, users remain deeply involved in reviewing, editing, and guiding generated outputs.
However, the frequency and intensity of direct cognitive practice may change.
If writing is one of the primary ways humans exercise structured thinking, then reducing the need to write independently may gradually alter how those cognitive abilities develop and persist.
This possibility does not imply a deterministic decline in human intelligence. Rather, it suggests that the relationship between cognition, language, and technology is entering a new phase.
Ipromptism attempts to describe this transition.
A New Form of Illiteracy?
If large language models increasingly generate structured text for us, an important question emerges:
Could AI contribute to a new form of illiteracy?
Traditionally, illiteracy refers to the inability to read or write. But in the context of generative AI, the concern is different. The risk is not that people will forget how to write entirely, but that they may gradually lose the habit of producing structured language and reasoning independently.
In many countries, illiteracy remains a significant social challenge.
For example, data from the French National Agency for the Fight Against Illiteracy (ANLCI) shows that millions of adults still struggle with basic reading and writing skills.
https://www.anlci.gouv.fr/illettrisme/chiffres-cles
Writing is more than communication. It is one of the primary tools through which humans clarify ideas, test arguments, and refine understanding.
When we write, we engage in a process of cognitive effort: selecting concepts, organizing logic, and shaping meaning. This effort plays a key role in the development and maintenance of intellectual skills.
If AI systems increasingly perform these tasks for us, the cognitive exercise associated with writing may become less frequent.
This does not imply that generative AI inevitably leads to intellectual decline. Technology has always reshaped human skills rather than simply eliminating them.
However, it does suggest that our relationship with language production may be undergoing a structural change.
Ipromptism highlights this possibility: a cultural shift where the production of structured language gradually moves from humans to machines.
Prompting as a New Literacy
At the same time, prompting itself can be seen as a new form of literacy.
Using AI systems effectively requires more than typing a simple instruction. It involves understanding context, defining goals, structuring requests, and iterating based on generated outputs.
Recent research has also begun to examine the cognitive impact of generative AI on learning.
Some studies suggest that heavy reliance on AI systems may encourage what researchers call "metacognitive laziness", where users become increasingly dependent on automated reasoning rather than engaging in deeper reflection.
(Fan et al., 2025)
https://doi.org/10.1111/bjet.13583
Good prompts often require:
- clarity of intent
- understanding of the problem domain
- the ability to evaluate and refine generated results
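As an illustration, a prompt that exercises these skills might spell out role, context, goal, constraints, and evaluation criteria explicitly. The wording below is invented for the example; the point is the structure, not the content.

```python
# An illustrative structured prompt. The content is hypothetical;
# what matters is the shape: intent, domain context, explicit
# constraints, and the criteria used to evaluate and iterate.

structured_prompt = """\
Role: You are a technical editor.
Context: The draft below is the abstract of a cognitive-science paper.
Goal: Rewrite it for a general audience in under 120 words.
Constraints:
- Keep every cited figure unchanged.
- Avoid jargon; define any unavoidable technical term.
Evaluation: I will check length, accuracy, and tone, then iterate.

Draft:
{draft}
"""

print(structured_prompt.format(draft="(draft text goes here)"))
```

Each section of the template maps onto one of the skills above: the goal states the intent, the context and constraints encode domain understanding, and the evaluation line commits the author to critically reviewing and refining the output.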
In this sense, prompting becomes a meta-writing skill. Instead of directly producing the final text, the user designs the conditions under which the text will be generated.
The role of the human shifts from writer to architect of generation.
This transformation does not eliminate cognitive effort; it redistributes it. Users must learn how to guide AI systems effectively, interpret outputs critically, and maintain intellectual responsibility for the final result.
Prompt literacy may therefore become an essential competence in the age of generative AI.
Early Institutional Recognition
The concept of Ipromptism is already beginning to appear in discussions about AI literacy and education.
For example, the French National Artificial Intelligence Association (ANIA) recently highlighted the concept when discussing the emerging cognitive divide between individuals who can effectively interact with generative AI systems and those who cannot.
According to this perspective, the ability to formulate structured prompts, critically evaluate AI outputs, and guide machine reasoning may become a core professional skill in the coming years.
The Future: Hybrid Intelligence
Rather than replacing human cognition, generative AI may lead to a new model of hybrid intelligence.
In this model, humans and machines perform complementary roles.
AI systems excel at:
- rapid generation of language
- summarization and synthesis
- pattern recognition across large datasets
Humans remain essential for:
- critical reasoning
- creativity and conceptual innovation
- ethical judgment and contextual understanding
The most effective knowledge production systems of the future may therefore combine both forms of intelligence.
Ipromptism should not be interpreted solely as a warning. It is also a framework for understanding how human cognition and AI systems are beginning to co-evolve.
The real challenge may not be whether AI makes us illiterate.
The real challenge is determining what new forms of literacy will define intellectual competence in the age of AI.
The full research paper and related work are available here:
https://independent.academia.edu/RomainBailleul
Discussion
What do you think?
Does prompting enhance human thinking by accelerating knowledge production, or could it gradually replace some of the cognitive processes traditionally involved in writing?
Are we witnessing the emergence of a new literacy, or the beginning of a new form of intellectual dependency?
I would be curious to hear how developers, researchers, and writers experience this shift in their own work.
If you're interested in research and reflections on AI, automation, and digital transformation, you can explore more of my work through the links below.
Author
Romain Bailleul is an independent researcher and AI consultant exploring the cognitive and societal impact of generative artificial intelligence.
Website: Romain Bailleul
Academia: https://independent.academia.edu/RomainBailleul