Nexusformer: Nonlinear Attention Expansion for Stable and Inheritable Transformer Scaling

arXiv cs.LG / April 22, 2026


Key Points

  • The paper argues that Transformer scaling is hard without retraining from scratch because standard attention uses linear Q/K/V projections that restrict feature extraction to fixed-dimensional subspaces.
  • It proposes “Nexusformer,” which replaces linear Q/K/V projections with a Nexus-Rank layer: a three-stage nonlinear mapping using dual activations across progressively higher-dimensional spaces.
  • The method supports “lossless” structured growth by injecting new capacity through zero-initialized blocks, which are designed to preserve pretrained representations while adding incremental capability.
  • Experiments on language modeling and reasoning benchmarks show Nexusformer can match Tokenformer's perplexity while using up to 41.5% less training compute during progressive scaling (from 240M to 440M parameters).
  • The authors analyze growth dynamics and show that zero initialization yields a stable convergence path, enabling a geometric scaling law that predicts performance across expansion sizes.
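
The paper does not spell out the Nexus-Rank layer's exact form in this summary, but the idea of replacing a linear Q/K/V projection with a staged nonlinear map through progressively wider spaces can be sketched as below. The stage widths, the choice of GELU and tanh as the "dual" activations, and combining them additively are all illustrative assumptions, not the authors' design:

```python
import numpy as np

def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def nexus_rank_sketch(x, dims, rng):
    """Hypothetical three-stage nonlinear projection.

    Each stage applies a linear map into a (typically wider) space, then
    a pair of activations -- a stand-in for the paper's "dual activations
    across progressively higher-dimensional spaces".
    """
    h = x
    d_prev = x.shape[-1]
    for d in dims:                      # e.g. widths grow stage by stage
        W = rng.normal(scale=d_prev ** -0.5, size=(d_prev, d))
        z = h @ W
        h = gelu(z) + np.tanh(z)        # dual activations, combined additively (assumption)
        d_prev = d
    return h

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 16))            # (tokens, model dim)
q = nexus_rank_sketch(x, dims=(32, 64, 64), rng=rng)
print(q.shape)                          # (4, 64)
```

The point of the sketch is the structural contrast with standard attention, where Q, K, and V are each a single matrix multiply confined to one fixed-dimensional subspace.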

Abstract

Scaling Transformers typically necessitates training larger models from scratch, as standard architectures struggle to expand without discarding learned representations. We identify the primary bottleneck in the attention mechanism's linear projections, which strictly confine feature extraction to fixed-dimensional subspaces, limiting both expressivity and incremental capacity. To address this, we introduce Nexusformer, which replaces linear Q/K/V projections with a Nexus-Rank layer, a three-stage nonlinear mapping driven by dual activations in progressively higher dimensional spaces. This design overcomes the linearity constraint and enables lossless structured growth: new capacity can be injected along two axes via zero-initialized blocks that preserve pretrained knowledge. Experiments on language modeling and reasoning benchmarks demonstrate that Nexusformer matches Tokenformer's perplexity using up to 41.5% less training compute during progressive scaling (240M to 440M). Furthermore, our analysis of growth dynamics reveals that zero initialization induces a stable convergence trajectory, allowing us to derive a geometric scaling law that accurately predicts performance across expansion scales.
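
The "lossless" claim rests on a standard property of zero-initialized expansion: new weight blocks contribute exactly nothing at the moment of growth, so the pretrained function is preserved and the new capacity is trained in from a clean starting point. A minimal sketch of that idea (not the paper's specific growth operator, which acts along two axes of the Nexus-Rank layer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretrained projection: maps d_in -> d_out.
d_in, d_out = 8, 8
W = rng.normal(size=(d_out, d_in))

# Grow along the output axis by appending zero-initialized rows.
# The new units emit exactly zero at expansion time, so the original
# d_out-dimensional output is reproduced unchanged.
d_new = 4
W_grown = np.vstack([W, np.zeros((d_new, d_in))])

x = rng.normal(size=(d_in,))
y_old = W @ x
y_grown = W_grown @ x

assert np.array_equal(y_grown[:d_out], y_old)   # pretrained outputs preserved
assert np.all(y_grown[d_out:] == 0.0)           # new capacity starts inert
```

Because the expanded model's loss starts exactly where the pretrained model's left off, the authors can study the subsequent convergence trajectory directly, which is what makes the geometric scaling law across expansion sizes possible.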