Three-Phase Transformer
arXiv cs.CL / 4/17/2026
📰 News · Models & Research
Key Points
- The paper introduces the Three-Phase Transformer (3PT), a residual-stream structural prior for decoder-only Transformers built on a standard SwiGLU + RMSNorm + RoPE + GQA backbone.
- It partitions each hidden state into N cyclic channels and applies phase-respecting operations: per-channel RMSNorm, a channel-wise 2D Givens rotation between attention and FFN, and a GQA head-alignment constraint (see the first sketch after this list).
- A key novelty is injecting a fixed “Gabriel’s horn” profile into a one-dimensional DC subspace orthogonal to the channels, designed to compose orthogonally with RoPE’s relative-position rotation (see the second sketch after this list).
- On WikiText-103 at 123M parameters, 3PT improves perplexity by 7.20% over a matched RoPE-only baseline and converges faster (a 1.93× step-count speedup); results suggest N acts as a tunable parameter-sharing knob rather than having a single unique optimum.
- The authors report analyses of self-stabilizing geometry, rotation-angle drift behavior (including a U-shaped depth profile), and orthogonal composition with RoPE, attention, and FFN.
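
To make the second bullet concrete: the paper's code is not reproduced here, but the following is a minimal PyTorch sketch of what per-channel RMSNorm and a channel-wise 2D Givens rotation could look like, assuming the hidden dimension is split into N contiguous slices and each channel gets a single learned rotation angle applied to paired coordinates. The class names, parameter shapes, and channel layout are illustrative assumptions, not the paper's implementation; the GQA head-alignment constraint is omitted.

```python
import torch
import torch.nn as nn


class PerChannelRMSNorm(nn.Module):
    """RMSNorm applied independently to each of N cyclic channels.

    Hypothetical sketch: we assume the hidden dimension d_model is split
    into N contiguous slices of size d_model // N, each normalized on its
    own with its own learned gain.
    """

    def __init__(self, d_model: int, n_channels: int, eps: float = 1e-6):
        super().__init__()
        assert d_model % n_channels == 0
        self.n = n_channels
        self.d_c = d_model // n_channels
        self.weight = nn.Parameter(torch.ones(n_channels, self.d_c))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (..., d_model) -> (..., N, d_c): normalize each channel separately
        shape = x.shape
        x = x.view(*shape[:-1], self.n, self.d_c)
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        x = x * rms * self.weight
        return x.view(*shape)


class ChannelGivensRotation(nn.Module):
    """Learned 2D Givens rotation per channel; one plausible reading of the
    'channel-wise 2D Givens rotation between attention and FFN'.

    Assumption: each channel's coordinates are grouped into (even, odd)
    pairs and every pair is rotated by that channel's single learned angle.
    In a block, this module would sit between the attention output and the
    FFN input, per the summary above.
    """

    def __init__(self, d_model: int, n_channels: int):
        super().__init__()
        assert d_model % n_channels == 0 and (d_model // n_channels) % 2 == 0
        self.n = n_channels
        self.d_c = d_model // n_channels
        self.theta = nn.Parameter(torch.zeros(n_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shape = x.shape
        x = x.view(*shape[:-1], self.n, self.d_c // 2, 2)
        cos = torch.cos(self.theta).view(self.n, 1, 1)
        sin = torch.sin(self.theta).view(self.n, 1, 1)
        x0, x1 = x[..., 0:1], x[..., 1:2]
        # Standard 2x2 Givens rotation applied to every coordinate pair.
        rotated = torch.cat([x0 * cos - x1 * sin,
                             x0 * sin + x1 * cos], dim=-1)
        return rotated.view(*shape)
```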
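And for the DC-subspace injection: a hedged sketch, assuming the “Gabriel’s horn” profile is the scalar 1/(t+1) over positions t (matching the horn's 1/x generatrix) and that it is added along a fixed unit direction u kept out of the RoPE-rotated coordinate pairs, so the two compose without interference. The function names, the exact profile, and the injection point are assumptions, not the paper's specification.

```python
import torch


def gabriels_horn_dc(seq_len: int, scale: float = 1.0) -> torch.Tensor:
    """Fixed 1/x ('Gabriel's horn') profile over positions (an assumption:
    the paper's exact profile is not given here)."""
    t = torch.arange(seq_len, dtype=torch.float32)
    return scale / (t + 1.0)


def inject_dc(x: torch.Tensor, u: torch.Tensor) -> torch.Tensor:
    """Add the profile along a fixed unit direction u in hidden space.

    x: (batch, seq_len, d_model); u: (d_model,), assumed orthogonal to the
    N cyclic channels so RoPE's pairwise rotations never touch it.
    """
    c = gabriels_horn_dc(x.shape[1]).to(x.device, x.dtype)  # (seq_len,)
    return x + c[None, :, None] * u[None, None, :]
```

Because u lives in a coordinate RoPE leaves unrotated (under this sketch's assumption), the injected DC component and RoPE's relative-position rotation act on disjoint subspaces, which is one way to read "composes orthogonally."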
Related Articles

FastAPI With LangChain and MongoDB
Dev.to
[2026] OpenTelemetry for LLM Observability — Self-Hosted Setup
Dev.to

The AI Education Product on Product Hunt Worth Watching
Dev.to

The joy and pain of training an LLM from scratch
Reddit r/LocalLLaMA

Did you know that you can use Qwen3.5-35B-A3B-Base as an instruction/reasoning Model?
Reddit r/LocalLLaMA