Three-Phase Transformer

arXiv cs.CL / April 17, 2026

📰 News · Models & Research

Key Points

  • The paper introduces the Three-Phase Transformer (3PT), a residual-stream structural prior for decoder-only Transformers built on a standard SwiGLU + RMSNorm + RoPE + GQA backbone.
  • It partitions each hidden state into N cyclic channels, applying phase-respecting operations including per-channel RMSNorm, a channel-wise 2D Givens rotation between attention and FFN, and a GQA head alignment constraint.
  • A key novelty is injecting a fixed “Gabriel’s horn” profile into a one-dimensional DC subspace that is orthogonal to the channels, designed to compose orthogonally with RoPE’s relative-position rotation.
  • Experiments on WikiText-103 show that, at 123M parameters, 3PT improves perplexity by 7.20% over a matched RoPE-only baseline and converges faster (1.93× step-count speedup), while results suggest N acts as a tunable parameter-sharing knob with no single unique optimum.
  • The authors report analyses on self-stabilizing geometry, rotation-angle drift behavior (including a U-shaped depth profile), and orthogonal composition with RoPE, attention, and FFN.
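The channel partition and per-channel RMSNorm from the key points above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the contiguous-block channel layout, the `eps` value, and the absence of a learned gain are all assumptions made here for clarity.

```python
import numpy as np

def per_channel_rmsnorm(x, n_channels=3, eps=1e-6):
    """Sketch of a per-channel RMSNorm: split the hidden vector into
    n_channels equal channels and normalize each by its own RMS.

    Assumption: channels are contiguous blocks of the last axis; the
    paper's "cyclic" partition may use a different layout.
    """
    d = x.shape[-1]
    assert d % n_channels == 0, "hidden size must divide evenly into channels"
    # (..., d) -> (..., n_channels, d // n_channels)
    chans = x.reshape(*x.shape[:-1], n_channels, d // n_channels)
    # RMS computed independently per channel, not over the full vector
    rms = np.sqrt(np.mean(chans ** 2, axis=-1, keepdims=True) + eps)
    return (chans / rms).reshape(x.shape)
```

Because each channel carries its own normalizer, a large activation in one channel cannot rescale the others, which is the phase-respecting property the bullet describes.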

Abstract

We present Three-Phase Transformer (3PT), a residual-stream structural prior for decoder-only Transformers on a standard SwiGLU + RMSNorm + RoPE + GQA backbone. The hidden vector is partitioned into N equally-sized cyclic channels, each maintained by phase-respecting operations: a per-channel RMSNorm, a 2D Givens rotation between attention and FFN that rotates each channel by theta + i*(2*pi/N), and a head-count constraint aligning GQA heads with the partition. The architecture is a self-stabilizing equilibrium between scrambling and re-imposition, not a bolted-on module. The partition carves out a one-dimensional DC subspace orthogonal to the channels, into which we inject a fixed Gabriel's horn profile r(p) = 1/(p+1) as an absolute-position side-channel composing orthogonally with RoPE's relative-position rotation. The canonical N=3 borrows its metaphor from balanced three-phase AC, where three sinusoids 120 degrees apart sum to zero with no anti-correlated pair. At 123M parameters on WikiText-103, 3PT achieves -7.20% perplexity (-2.62% bits-per-byte) over a matched RoPE-only baseline at +1,536 parameters (0.00124% of total), with 1.93x step-count convergence speedup (1.64x wall-clock). N behaves as a parameter-sharing knob rather than a unique optimum: at 5.5M an N-sweep over {1,2,3,4,6,8,12} is near-monotone with N=1 winning; at 123M a three-seed sweep finds N=3 and N=1 statistically indistinguishable. The load-bearing mechanism is the channel-partitioned residual stream, per-block rotation, per-phase normalization, and horn DC injection. We characterize (a) self-stabilization of the geometry without explicit enforcement, a novel instance of the conservation-law framework for neural networks; (b) a U-shaped depth profile of rotation-angle drift at 12 layers; (c) orthogonal composition with RoPE, attention, and FFN.
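Two mechanisms from the abstract can be illustrated concretely: the channel-wise Givens rotation with phase offsets theta + i*(2*pi/N), and the Gabriel's horn position profile r(p) = 1/(p+1). The sketch below is a guess at one plausible reading, not the paper's code: in particular, applying the 2D rotation to consecutive coordinate pairs within each channel is an assumption, since the abstract does not specify the rotation plane.

```python
import numpy as np

def givens_rotate_channels(x, theta, n_channels=3):
    """Channel-wise 2D Givens rotation sketch: channel i is rotated by
    an angle theta + i * (2*pi / n_channels), so the phase offsets are
    spread evenly around the circle (120 degrees apart for N=3).

    Assumption: each channel's coordinates are rotated in consecutive
    2D pairs; the paper may choose a different plane.
    """
    d = x.shape[-1]
    assert d % (2 * n_channels) == 0, "need an even number of dims per channel"
    ch = x.reshape(*x.shape[:-1], n_channels, d // n_channels)
    out = np.empty_like(ch)
    for i in range(n_channels):
        a = theta + i * (2 * np.pi / n_channels)
        c, s = np.cos(a), np.sin(a)
        # Rotate consecutive coordinate pairs within channel i by angle a
        pairs = ch[..., i, :].reshape(*ch.shape[:-2], -1, 2)
        rot = np.stack([c * pairs[..., 0] - s * pairs[..., 1],
                        s * pairs[..., 0] + c * pairs[..., 1]], axis=-1)
        out[..., i, :] = rot.reshape(ch[..., i, :].shape)
    return out.reshape(x.shape)

def horn_dc_profile(positions):
    """Fixed Gabriel's horn absolute-position profile r(p) = 1/(p+1),
    injected into the DC subspace orthogonal to the channels."""
    return 1.0 / (np.asarray(positions, dtype=float) + 1.0)
```

Because a Givens rotation is orthogonal, it preserves channel norms, which is consistent with the abstract's claim that the rotation composes orthogonally with RoPE, attention, and FFN; the horn profile decays monotonically in position, giving an absolute-position signal separate from RoPE's relative one.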