Hyperloop Transformers

arXiv cs.LG · April 24, 2026

📰 News · Models & Research

Key Points

  • The paper presents a new LLM architecture, the Hyperloop Transformer, designed to improve parameter efficiency under memory and latency constraints.
  • It uses “looped Transformers” that reuse Transformer layers across depth: only the middle block is applied recurrently, while the begin and end blocks are each applied once.
  • The architecture further augments the recurrent middle block with hyper-connections, which expand the residual stream into matrix-valued residual streams; these are applied only after each loop, so they add minimal extra parameters and compute.
  • Experiments across multiple model scales show improved performance over depth-matched and mHC Transformer baselines while using about 50% fewer parameters.
  • The performance gains remain after post-training weight quantization, suggesting the approach is well-suited for memory-efficient language modeling on constrained devices.
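The begin/middle/end structure described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the "blocks" below are stand-in residual linear maps rather than real Transformer layers, and the stream count, loop count, and mixing matrices are all hypothetical choices. What it shows is the control flow: the middle block's weights are stored once but applied recurrently, and a per-loop hyper-connection matrix mixes the expanded residual streams only after each loop.

```python
import numpy as np

rng = np.random.default_rng(0)

D, S, LOOPS = 8, 2, 4   # model dim, residual streams, middle-block loops (illustrative)

def block(w):
    """Stand-in for a stack of Transformer layers (illustrative only):
    a residual linear map, so the control flow is runnable."""
    return lambda h: h + h @ w

begin  = block(rng.normal(scale=0.1, size=(D, D)))
middle = block(rng.normal(scale=0.1, size=(D, D)))   # one weight set, reused every loop
end    = block(rng.normal(scale=0.1, size=(D, D)))
mix    = np.stack([np.eye(S) for _ in range(LOOPS)]) # one hyper-connection matrix per loop

def hyperloop_forward(x):                  # x: (seq, D)
    h = begin(x)
    streams = np.stack([h] * S)            # expand residual into S streams: (S, seq, D)
    for i in range(LOOPS):
        out = middle(streams.sum(axis=0))  # shared middle block, applied recurrently
        streams = streams + out            # broadcast the update into every stream
        streams = np.einsum('st,tld->sld', mix[i], streams)  # mix streams after the loop
    return end(streams.sum(axis=0))        # collapse streams for the end block

y = hyperloop_forward(rng.normal(size=(5, D)))
```

Because the stream mixing happens once per loop on a small (S × S) matrix, rather than inside every layer, the hyper-connections add almost nothing to the parameter or compute budget, matching the key point above.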

Abstract

LLM architecture research generally aims to maximize model quality subject to fixed compute/latency budgets. However, many applications of interest such as edge and on-device deployment are further constrained by the model's memory footprint, thus motivating parameter-efficient architectures for language modeling. This paper describes a simple architecture that improves the parameter-efficiency of LLMs. Our architecture makes use of looped Transformers as a core primitive, which reuse Transformer layers across depth and are thus more parameter-efficient than ordinary (depth-matched) Transformers. We organize the looped Transformer into three blocks--begin, middle, and end blocks--where each block itself consists of multiple Transformer layers, and only the middle block is applied recurrently across depth. We augment the looped middle block with hyper-connections (Xie et al., 2026), which expand the residual stream into matrix-valued residual streams. Hyper-connections are applied only after each loop, and therefore add minimal new parameters and compute cost. Across various model scales, we find that our Hyper-Connected Looped Transformer (Hyperloop Transformer) is able to outperform depth-matched Transformer and mHC Transformer baselines despite using approximately 50% fewer parameters. The outperformance persists through post-training weight quantization, thus positioning Hyperloop Transformers as an attractive architecture for memory-efficient language modeling.
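The parameter savings in the abstract come from a simple accounting: looped layers are stored once but applied many times. The layer counts below are hypothetical, chosen only to illustrate how a roughly 50% reduction can arise; the paper's actual configurations may differ.

```python
# Illustrative parameter-count arithmetic (layer counts are hypothetical,
# not taken from the paper). Looping the middle block means its weights are
# stored once but applied LOOPS times, so effective depth can match a plain
# depth-matched Transformer at a fraction of the stored parameters.
BEGIN, MIDDLE, END, LOOPS = 4, 4, 4, 4

effective_depth = BEGIN + MIDDLE * LOOPS + END   # layers applied per forward pass: 24
stored_layers   = BEGIN + MIDDLE + END           # layers whose weights are stored: 12

print(effective_depth, stored_layers)            # 24 effective vs 12 stored: ~50% fewer
```

Under these assumed counts, the model runs 24 layers of computation while storing only 12 layers of weights, which is the kind of ~50% parameter reduction the abstract reports relative to a depth-matched baseline.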