Is Libet-style “free will illusion” a general property of hierarchical systems (brains and models)?

Reddit r/artificial / 4/14/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The article draws a parallel between Libet’s findings that brain activity precedes conscious awareness and similar timing/visibility gaps in hierarchical ML systems.
  • It argues that lower layers in both brains and models effectively “commit” to a decision or state earlier, while higher layers only later make that commitment observable or interpretable.
  • Using this analogy, it suggests that what subjectively feels like intention forming may actually be a downstream readout of an earlier, non-conscious process rather than the point where the choice is generated.
  • The broader claim is that complex hierarchical systems may generally exhibit a “reporting layer vs. making layer” mismatch, blurring how we distinguish deterministic machines from free agents at the phenomenological level.
  • It concludes that this does not mean machines have free will, but that the same structural property could produce a “free will illusion” in systems like biological brains.

I’ve been thinking about a parallel between the classic Libet experiment and how decisions seem to form in layered ML systems.

Libet found that the brain’s readiness potential starts ~550ms before movement, but the feeling of deciding only shows up ~200ms before. So the neural “commitment” appears ~350ms before conscious awareness. This has often been taken as evidence that free will is an illusion — the brain decides before “you” do.

What’s interesting is that you see a structurally similar pattern in hierarchical models:

  • Lower-level processes effectively “commit” to a direction/state.
  • That commitment only becomes visible later in higher-level representations (i.e. what you can actually observe or interpret).

So in both cases, the system’s “output layer” (conscious awareness in Libet’s setup, the observable or interpretable higher-level representation in a model) is downstream of the actual commitment point. What feels like intention forming is actually intention being read, not written. The write happened earlier, in a layer that doesn’t have direct phenomenal access.
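To make the “lower layer commits, higher layer reveals” idea concrete, here is a minimal sketch of one way you could probe for it in a toy model: train a small MLP, then fit a linear probe at each hidden layer to predict the network’s own final decision. Everything in it (the blob data, the layer sizes, the probe setup) is an illustrative assumption of mine, not anything from the original post; it is a template for the measurement, not a result.

```python
# Minimal sketch (illustrative assumptions throughout: toy 2-class data,
# a small MLP, simple linear probes). Question asked per hidden layer:
# how well can a linear probe already predict the network's *final*
# decision from that layer's activations? High agreement at an early
# layer means the "commitment" predates the layer you normally read out.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: two Gaussian blobs in 16 dimensions.
X = torch.cat([torch.randn(500, 16) + 1.0, torch.randn(500, 16) - 1.0])
y = torch.cat([torch.zeros(500, dtype=torch.long), torch.ones(500, dtype=torch.long)])

# Small MLP whose hidden activations we can inspect layer by layer.
layers, d = [], 16
for h in (64, 64, 64):
    layers += [nn.Linear(d, h), nn.ReLU()]
    d = h
layers += [nn.Linear(d, 2)]
model = nn.Sequential(*layers)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    nn.functional.cross_entropy(model(X), y).backward()
    opt.step()

# Collect the activation after each ReLU, plus the model's final decision.
with torch.no_grad():
    acts, h_x = [], X
    for layer in model:
        h_x = layer(h_x)
        if isinstance(layer, nn.ReLU):
            acts.append(h_x.clone())
    final_pred = h_x.argmax(dim=1)

# For each depth, fit a linear probe to predict the network's own final
# decision from that layer's activations, and report how often it agrees.
for i, a in enumerate(acts):
    probe = nn.Linear(a.shape[1], 2)
    popt = torch.optim.Adam(probe.parameters(), lr=1e-2)
    for _ in range(200):
        popt.zero_grad()
        nn.functional.cross_entropy(probe(a), final_pred).backward()
        popt.step()
    agree = (probe(a).argmax(dim=1) == final_pred).float().mean().item()
    print(f"layer {i}: probe agrees with final decision on {agree:.1%} of inputs")
```

On a toy task like this the probes may agree near-perfectly at every depth; on a real model the interesting quantity is the depth at which probe agreement saturates, since the earlier it does, the earlier the effective commitment point sits relative to the layer you actually observe.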

That raises a broader question:

Is this a general property of complex hierarchical systems — that the layer reporting a decision isn’t the layer that made it? This collapses the distinction between "deterministic machine" and "free agent" — not because machines have free will, but because the biological substrate that generates the feeling of free will is doing the same thing machines do.

submitted by /u/Naive_Weakness6436