Hierarchical vs. Flat Iteration in Shared-Weight Transformers
arXiv cs.CL / 4/17/2026
📰 News · Ideas & Deep Analysis · Models & Research
Key Points
- The paper studies whether a hierarchically structured, shared-weight recurrent scheme in Transformers (HRM-LM) can achieve the same representational quality as stacking independent Transformer layers.
- HRM-LM replaces N Transformer layers with a two-speed recurrent design, using a Fast module at every step for local refinement and a Slow module every T steps for global compression.
- The method is unrolled for M = N×T recurrent steps, with parameters shared across every step of the unrolled computation (see the sketch after this list).
- With a parameter-matched Universal Transformer ablation (UniTF, 1.2B), replicated across five independent runs, the authors find a robust, pronounced empirical gap in performance and representation quality between HRM-LM and the baseline stacking approach.
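
The two-speed recurrence is simple to sketch. The snippet below is a minimal, illustrative PyTorch version, not the authors' implementation: the `TwoSpeedRecurrentBlock` name, the use of a plain `nn.TransformerEncoderLayer` as a stand-in for each of the Fast and Slow modules, and all hyperparameter values are assumptions made for clarity.

```python
import torch
import torch.nn as nn


class TwoSpeedRecurrentBlock(nn.Module):
    """Illustrative two-speed, shared-weight recurrence (not the paper's code)."""

    def __init__(self, d_model: int = 512, n_heads: int = 8, N: int = 6, T: int = 4):
        super().__init__()
        # One shared Fast module and one shared Slow module; a standard
        # TransformerEncoderLayer stands in for each (assumption).
        self.fast = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.slow = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.T = T          # the Slow module fires once every T steps
        self.M = N * T      # total unrolled steps, matching a stack of N layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        for step in range(1, self.M + 1):
            x = self.fast(x)          # local refinement at every step
            if step % self.T == 0:
                x = self.slow(x)      # global compression every T steps
        return x


# Usage: M = N*T shared-weight steps in place of N independent layers.
block = TwoSpeedRecurrentBlock(d_model=512, n_heads=8, N=6, T=4)
out = block(torch.randn(2, 128, 512))
print(out.shape)  # torch.Size([2, 128, 512])
```

The property at issue is that the same two parameter sets are reused across all M steps, so compute scales with the unroll depth while the parameter count stays close to that of two layers rather than N.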


