No AI system using the forward inference pass can ever be conscious.

Reddit r/artificial / 3/28/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The author argues that current AI systems cannot be conscious because they rely on bounded forward inference passes that produce transient integrated activation patterns.
  • They claim that large-scale continuous integration is a core correlate of consciousness, and that such integration requires a continuously evolving, self-updating internal state rather than repeatedly reconstructed discrete states.
  • The article contrasts transformer-like architectures with biological neural activity, emphasizing continuous, overlapping, dynamically recursive processes that create a temporally extended “now.”
  • It contends that adding external memory (context windows, vector databases, agent scaffolding) does not solve the issue, since it stores prior outputs rather than maintaining the underlying evolving high-dimensional internal state.
  • The author concludes that overcoming the limitation would likely require architectural changes that maintain and update a unified internal state in real time, not merely more compute or scale.

I mean consciousness as in what it is like to be, from the inside.

Current AI systems concentrate integration within the forward pass, and the forward pass is a bounded computation.

Integration is not incidental. Across neuroscience, measures of large-scale integration are among the most reliable correlates of consciousness. Whatever its full nature, consciousness appears where information is continuously combined into a unified, evolving state.

In transformer models, the forward pass is the only locus where such integration occurs. It produces a globally integrated activation pattern from the current inputs and parameters. If any component were a candidate substrate, it would be this.

However, that state is transient. Activations are computed, used to generate output, and then discarded. Each subsequent token is produced by a new pass. There is no mechanism by which the integrated state persists and incrementally updates itself over time.
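The transience described above can be sketched in a toy form. This is not a real transformer, just an illustrative stand-in: each call rebuilds its "integrated state" from the static parameters and the token history, emits a token, and then the state is gone. All names and the arithmetic are invented for illustration.

```python
# Toy illustration (hypothetical, NOT a real transformer): each "forward
# pass" derives an integrated state from scratch and discards it after
# emitting a token. Only the token list survives between calls.

def forward_pass(params, tokens):
    # The "integrated state" is computed fresh from static params plus
    # the full token sequence on every call.
    state = sum(params[t % len(params)] for t in tokens)
    next_token = state % 50   # emit a token derived from the state
    return next_token         # `state` goes out of scope here

params = [3, 7, 11]           # stand-in for frozen weights
tokens = [1, 2]               # stand-in for the prompt
for _ in range(3):
    tokens.append(forward_pass(params, tokens))
# After the loop, nothing persists except `tokens` itself; no activation
# from one pass is ever carried into the next.
```

The point of the sketch: the system's only continuity is the token list, which is an *output* record, not the state that produced the outputs.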

This contrasts with biological systems. Neural activity is continuous, overlapping, and recursively dependent on prior states. The present state is not reconstructed from static parameters; it is a direct continuation of an ongoing dynamical process. This continuity enables what can be described as a constructed “now”: a temporally extended window of integrated activity.

Current AI systems do not implement such a process. They generate discrete, sequentially related states, but do not maintain a single, continuously evolving integrated state.

External memory systems (context windows, vector databases, agent scaffolding) do not alter this. They store representations of prior outputs, not the underlying high-dimensional state of the system as it evolves.
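The distinction between storing outputs and maintaining state can be made concrete with a minimal sketch. Everything here is hypothetical and simplified: `step_with_memory` and its toy arithmetic are invented for illustration, not any real system's API. The memory log records what the model emitted; the internal state that produced the emission is deleted every step.

```python
# Minimal sketch (illustrative): external memory records what the model
# *said*, while the high-dimensional internal state behind each output
# is discarded after every step.

def step_with_memory(memory, prompt):
    # Build a throwaway "internal state" from the prompt (toy arithmetic).
    internal_state = [ord(c) * (i + 1) for i, c in enumerate(prompt)]
    output = chr(97 + sum(internal_state) % 26)  # emit one character
    memory.append(output)   # the log keeps the output...
    del internal_state      # ...but the state that produced it is gone
    return output

memory = []
step_with_memory(memory, "hi")
step_with_memory(memory, "hi" + memory[-1])
# `memory` holds emitted characters only; `internal_state` never persists.
```

This mirrors the claim in the paragraph above: feeding the log back in at the next step reconstructs a *new* state from text, rather than continuing the old one.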

The limitation is therefore architectural, not a matter of scale or compute.

If consciousness depends on continuous, self-updating integration, then systems based on discrete forward passes with non-persistent activations do not meet that condition.

A plausible path toward artificial sentience would require architectures that maintain and update a unified internal state in real time, rather than repeatedly reconstructing it from text while the activation patterns themselves are discarded.

submitted by /u/jahmonkey