LLM comprehension question

Reddit r/artificial / 4/12/2026

💬 Opinion · Ideas & Deep Analysis

Key Points

  • The post asks whether people feel lingering confusion when LLMs explain complex concepts or provide long-form dives, even when the information is accurate and clearly communicated.
  • The author attributes the discomfort to the way language is phrased and the flow of the text, describing it as an “uncanny valley” of comprehension.
  • They speculate that mimicry of human language (without truly “organic” language grounding) may prevent readers’ brains from forming an intuitive logic path, leaving them subtly “stranded.”
  • The post frames the experience as personal and invites others to share whether they perceive similar comprehension effects.

Basically, does anyone else get a really strange sense of lingering confusion and non-comprehension when an LLM explains a complex concept or tries to give a long-form dive into something?

It's not that they necessarily get it wrong; most often they can communicate the information cleanly and accurately, especially in things like AI-scripted YouTube videos where the creator had their finger on the pulse of the information. It's just something about the way it's said and the flow of the actual language itself that feels like some sort of comprehension uncanny valley.

It might just be me, but I'm curious to know if other people feel this, because it makes me wonder if there's some kind of organic funk in the way we talk as people that makes an effective human explanation easier to understand than an LLM's. Maybe the fundamental practice of generating outputs that mimic human language, rather than actual organic language, means our brains can't quite find that logic to follow, and it leaves us ever so subtly, subconsciously stranded?

Just a random late-night ponder.

submitted by /u/Skyfox585