AI Navigate

MANAR: Memory-augmented Attention with Navigational Abstract Conceptual Representation

arXiv cs.AI / 3/20/2026


Key Points

  • MANAR generalizes standard multi-head attention by introducing a memory-augmented central workspace and an Abstract Conceptual Representation inspired by Global Workspace Theory.
  • It defines an integration phase that aggregates retrieved memory concepts into a global ACR and a broadcasting phase that uses this state to contextualize local tokens.
  • The architecture achieves linear-time scaling by routing information through a constant-sized ACR, mitigating the quadratic complexity of traditional attention.
  • It is re-parameterizable to enable knowledge transfer from pretrained transformers via weight-copy, reducing adoption barriers compared with other linear-time alternatives.
  • Empirical results across language, vision, and speech tasks show competitive performance (GLUE 85.1, ImageNet-1K 83.9%, LibriSpeech 2.7% WER), positioning MANAR as an efficient and expressive alternative to quadratic attention.
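The weight-copy claim above rests on MANAR keeping the same semantic roles for its projections as standard multi-head attention. The paper's actual parameterization is not given in this summary, so the following is only a minimal sketch of what such a transfer could look like: single head, biases omitted, plain numpy arrays instead of framework modules, and the function and key names (`make_mha_params`, `init_manar_from_mha`, `memory`) are hypothetical.

```python
import numpy as np

def make_mha_params(d, rng):
    """Standard MHA projection weights (single head, biases omitted)."""
    return {k: rng.standard_normal((d, d)) / np.sqrt(d)
            for k in ("Wq", "Wk", "Wv", "Wo")}

def init_manar_from_mha(mha_params, num_concepts, d, rng):
    """Sketch of knowledge transfer via weight-copy: because the projections
    keep identical semantic roles, a pretrained transformer's weights copy
    over directly; only the trainable memory of abstract concepts is new
    and must be freshly initialized."""
    manar = {k: v.copy() for k, v in mha_params.items()}  # direct weight-copy
    manar["memory"] = rng.standard_normal((num_concepts, d)) / np.sqrt(d)
    return manar
```

Structurally incompatible linear-time alternatives cannot do this, which is the adoption barrier the bullet refers to.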

Abstract

The MANAR (Memory-augmented Attention with Navigational Abstract Conceptual Representation) contextualization layer generalizes standard multi-head attention (MHA) by instantiating the principles of Global Workspace Theory (GWT). While MHA enables unconstrained all-to-all communication, it lacks the functional bottleneck and global integration mechanisms hypothesized in cognitive models of consciousness. MANAR addresses this by implementing a central workspace through a trainable memory of abstract concepts and an Abstract Conceptual Representation (ACR). The architecture follows a two-stage logic that maps directly to GWT mechanics: (i) an integration phase, where retrieved memory concepts converge to form a collective "mental image" (the ACR) based on input stimuli; and (ii) a broadcasting phase, where this global state navigates and informs the contextualization of individual local tokens. We demonstrate that efficient linear-time scaling is a fundamental architectural byproduct of instantiating GWT's functional bottleneck, as routing global information through a constant-sized ACR resolves the quadratic complexity inherent in standard attention. MANAR is a compatible re-parameterization of MHA with identical semantic roles for its projections, enabling knowledge transfer from pretrained transformers via weight-copy and thus overcoming the adoption barriers of structurally incompatible linear-time alternatives. MANAR enables non-convex contextualization, synthesizing representations that provably lie outside the convex hull of input tokens, a mathematical reflection of the creative synthesis described in GWT. Empirical evaluations confirm that MANAR matches or exceeds strong baselines across language (GLUE score of 85.1), vision (83.9% ImageNet-1K accuracy), and speech (2.7% WER on LibriSpeech), positioning it as an efficient and expressive alternative to quadratic attention.
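The abstract's two-phase logic can be made concrete with a toy sketch. The paper's exact formulation isn't reproduced in this summary, so the code below is an assumption-laden illustration only: a single head in plain numpy, biases omitted, with the attention directions (memory queries input during integration; tokens query the ACR during broadcasting) chosen to match the prose. What it does show faithfully is the complexity argument: with `m` memory concepts fixed, no step touches an n-by-n matrix.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def manar_contextualize(X, M, params):
    """Two-phase MANAR-style contextualization (illustrative, single head).

    X: (n, d) local tokens; M: (m, d) trainable memory of abstract concepts.
    Since m is a constant independent of n, both phases cost O(n*m*d),
    linear in sequence length, versus O(n^2 * d) for all-to-all attention.
    """
    d = X.shape[1]
    Wq, Wk, Wv = params["Wq"], params["Wk"], params["Wv"]

    # (i) Integration: memory concepts attend to the input stimuli and
    # converge into a constant-sized global state, the ACR
    # (the collective "mental image").
    retrieval = softmax((M @ Wq) @ (X @ Wk).T / np.sqrt(d))  # (m, n)
    acr = retrieval @ (X @ Wv)                               # (m, d)

    # (ii) Broadcasting: each local token attends only to the ACR, so all
    # global information is routed through the fixed-size workspace
    # bottleneck rather than through all-to-all token communication.
    weights = softmax((X @ Wq) @ acr.T / np.sqrt(d))         # (n, m)
    return weights @ acr                                     # (n, d)
```

Note this sketch does not attempt to reproduce the non-convex contextualization property the abstract claims; that presumably depends on details of the real architecture not given here.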