MANAR: Memory-augmented Attention with Navigational Abstract Conceptual Representation
arXiv cs.AI / 3/20/2026
Key Points
- MANAR generalizes standard multi-head attention by introducing a memory-augmented central workspace and an Abstract Conceptual Representation (ACR), inspired by Global Workspace Theory.
- It defines an integration phase that aggregates retrieved memory concepts into a global ACR and a broadcasting phase that uses this global state to contextualize local tokens; a sketch of this two-phase pattern follows the list.
- The architecture achieves linear-time scaling by routing information through a constant-sized ACR, mitigating the quadratic complexity of traditional attention.
- It is re-parameterizable, enabling knowledge transfer from pretrained transformers by directly copying attention weights (see the second sketch below), which lowers adoption barriers compared with other linear-time alternatives.
- Empirical results across language, vision, and speech tasks show competitive performance (GLUE 85.1, ImageNet-1K 83.9%, LibriSpeech 2.7% WER), positioning MANAR as an efficient and expressive alternative to quadratic attention.
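
Below is a minimal PyTorch sketch of the two-phase routing the key points describe. Every name here (`WorkspaceAttention`, `num_slots`, the use of `nn.MultiheadAttention` for each phase) is an illustrative assumption, not the paper's implementation; the point is only to show why routing through a constant number of workspace slots makes the cost linear in sequence length.

```python
from typing import Optional

import torch
import torch.nn as nn


class WorkspaceAttention(nn.Module):
    """Two-phase attention routed through a constant-sized slot set (the ACR).

    Integration: m slots cross-attend over the n tokens plus any retrieved
    memory concepts, cost O(n * m). Broadcast: the n tokens cross-attend
    over the m slots, cost O(n * m). With m fixed, total cost grows
    linearly in n, versus O(n^2) for full self-attention.
    """

    def __init__(self, dim: int, num_slots: int = 16, num_heads: int = 8):
        super().__init__()
        # Learned workspace slots; their count is constant w.r.t. sequence length.
        self.slots = nn.Parameter(torch.randn(num_slots, dim) * 0.02)
        self.integrate = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.broadcast = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens: torch.Tensor,
                memory: Optional[torch.Tensor] = None) -> torch.Tensor:
        # tokens: (batch, n, dim); memory: (batch, k, dim) retrieved concepts.
        batch = tokens.size(0)
        context = tokens if memory is None else torch.cat([memory, tokens], dim=1)
        slots = self.slots.unsqueeze(0).expand(batch, -1, -1)
        # Integration phase: aggregate the context into the global ACR.
        acr, _ = self.integrate(slots, context, context)
        # Broadcast phase: contextualize each token against the global ACR.
        out, _ = self.broadcast(tokens, acr, acr)
        return out


# Usage: a 1024-token sequence processed without any n-by-n attention map.
x = torch.randn(2, 1024, 256)
y = WorkspaceAttention(dim=256)(x)  # y.shape == (2, 1024, 256)
```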
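And a hedged sketch of the weight-copy idea, building on the module above: seeding the new layer's projections from a pretrained transformer attention layer so existing checkpoints can initialize the linear-time variant. The one-to-one mapping shown (one source layer's packed Q/K/V and output projections copied into both phases) is an assumption for illustration; the paper's exact re-parameterization may differ.

```python
import torch
import torch.nn as nn


def copy_attention_weights(src: nn.MultiheadAttention,
                           dst: WorkspaceAttention) -> None:
    """Seed both phases of a WorkspaceAttention from one pretrained layer.

    Assumes src uses PyTorch's packed in-projection (the default when
    query, key, and value share one embedding dimension). Copying the same
    source into both phases is an illustrative choice, not the paper's recipe.
    """
    with torch.no_grad():
        for phase in (dst.integrate, dst.broadcast):
            phase.in_proj_weight.copy_(src.in_proj_weight)
            phase.in_proj_bias.copy_(src.in_proj_bias)
            phase.out_proj.weight.copy_(src.out_proj.weight)
            phase.out_proj.bias.copy_(src.out_proj.bias)


# Usage: transfer a pretrained self-attention layer into the linear-time module.
pretrained = nn.MultiheadAttention(256, 8, batch_first=True)
module = WorkspaceAttention(dim=256, num_heads=8)
copy_attention_weights(pretrained, module)
```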