The Topological Trouble With Transformers

arXiv cs.LG · April 21, 2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • Transformers represent structure by using an expanding context history, but their feedforward design makes it difficult to perform true dynamic state tracking over time.
  • Because state tracking involves inherently sequential dependencies, feedforward models push the evolving state deeper into the layer stack with each new input step; the information becomes inaccessible to shallow layers, and the model eventually exhausts its depth.
  • Workarounds such as dynamic-depth models or explicitly/implicitly externalized “thinking” can reduce the bottleneck, but they are often inefficient in both compute and memory.
  • The article argues for a shift toward temporally extended cognition implemented via recurrent architectures, and proposes a taxonomy based on whether recurrence occurs along depth or along time steps.
  • It also highlights future research directions, including improved state-space models and coarse-grained recurrence, to better integrate state tracking into modern foundation models.
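The depth-bottleneck claim above can be made concrete with a toy task. The sketch below (illustrative only, not from the article; the task and function names are my own) uses parity tracking: a recurrent update maintains a constant-size state over arbitrarily long inputs, while a model whose sequential capacity is bounded by a fixed depth can only fold in a bounded number of updates.

```python
# Toy illustration of state tracking: h_t = f(h_{t-1}, x_t).
# Task: track the parity of a bit stream.

def recurrent_track(bits):
    """Recurrent update: constant-size state, one step per input."""
    h = 0  # latent state: running parity
    for x in bits:
        h = h ^ x  # sequential dependency: h_t depends on h_{t-1}
    return h

def feedforward_track(bits, depth):
    """Caricature of a fixed-depth feedforward model: each layer can
    fold in at most one more sequential update, so once the sequence
    outgrows the depth, earlier state is lost."""
    h = 0
    for x in bits[-depth:]:  # only the last `depth` updates survive
        h = h ^ x
    return h
```

With enough depth the two agree (`feedforward_track([1, 0, 1, 1], depth=4)` matches `recurrent_track([1, 0, 1, 1])`), but at `depth=2` the feedforward version drops the early bits and returns the wrong parity.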

Abstract

Transformers encode structure in sequences via an expanding contextual history. However, their purely feedforward architecture fundamentally limits dynamic state tracking. State tracking -- the iterative updating of latent variables reflecting an evolving environment -- involves inherently sequential dependencies that feedforward networks struggle to maintain. Consequently, feedforward models push evolving state representations deeper into their layer stack with each new input step, rendering information inaccessible in shallow layers and ultimately exhausting the model's depth. While this depth limit can be bypassed by dynamic depth models and by explicit or latent thinking that externalizes state representations, these solutions are computationally and memory inefficient. In this article, we argue that temporally extended cognition requires refocusing from explicit thought traces to implicit activation dynamics via recurrent architectures. We introduce a taxonomy of recurrent and continuous-thought transformer architectures, categorizing them by their recurrence axis (depth versus step) and their ratio of input tokens to recurrence steps. Finally, we outline promising research directions, including enhanced state-space models and coarse-grained recurrence, to better integrate state tracking into modern foundation models.
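The recurrence-axis taxonomy in the abstract (depth versus step) can be sketched as two ways of iterating the same update function. The sketch below is my own paraphrase, not the article's code; function names are hypothetical.

```python
# Two axes of recurrence for a shared update function f(state, input).

def depth_recurrence(f, x, h0, n_iters):
    """Recurrence along depth: refine the latent state for a single
    input by applying the shared block n_iters times (in the spirit of
    looped / universal-transformer designs)."""
    h = h0
    for _ in range(n_iters):
        h = f(h, x)
    return h

def step_recurrence(f, xs, h0):
    """Recurrence along time steps: carry one state across the input
    sequence, one update per token (RNN / state-space style)."""
    h = h0
    for x in xs:
        h = f(h, x)
    return h
```

The "ratio of input tokens to recurrence steps" axis falls out naturally: `step_recurrence` spends exactly one update per token, while `depth_recurrence` can spend many updates on one token (or, in coarse-grained variants, one update on many tokens).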