LASER: Low-Rank Activation SVD for Efficient Recursion

arXiv cs.LG / 4/21/2026


Key Points

  • The paper studies Tiny Recursive Models (TRMs) and shows their recursive activation trajectories lie in an effectively linear, low-dimensional subspace.
  • It demonstrates that the dominant activation directions can be tracked dynamically using inexpensive power iterations, revealing how weight-sharing concentrates computation into specific eigendirections.
  • The authors find this eigendirection concentration varies sharply across different computational sites during recursion.
  • They introduce LASER (Low-Rank Activation SVD for Efficient Recursion), which uses matrix-free subspace tracking with a fidelity-triggered reset to dynamically maintain a low-rank basis (see the sketch after this list).
  • LASER achieves roughly 60% activation memory savings with no statistically significant accuracy degradation, and it motivates further questions about how recursive architectures allocate representational capacity and maintain stability during implicit reasoning.
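
The article does not include code, so the following is only a minimal Python sketch of what matrix-free power-iteration subspace tracking with a fidelity-triggered reset could look like. The function name `track_subspace`, the rank `r`, the number of power iterations, and the fidelity threshold `tau` are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): track the top-r activation subspace
# with a few matrix-free power iterations per recursion step, and reset the
# basis with a truncated SVD when captured energy drops below a threshold.
import numpy as np

def track_subspace(X, Q_prev=None, r=8, power_iters=2, tau=0.95):
    """X: (n, d) activation matrix for one recursion step.
    Q_prev: (d, r) basis carried over from the previous step (warm start).
    Returns an updated orthonormal basis Q, the low-rank coefficients, and a fidelity score."""
    n, d = X.shape
    # Warm-start from the previous basis; fall back to a random orthonormal one.
    Q = Q_prev if Q_prev is not None else np.linalg.qr(np.random.randn(d, r))[0]
    # A few power iterations on X^T X, applied matrix-free as X^T (X Q).
    for _ in range(power_iters):
        Z = X.T @ (X @ Q)            # (d, r); never forms X^T X explicitly
        Q, _ = np.linalg.qr(Z)       # re-orthonormalize
    # Fidelity: fraction of activation energy captured by the rank-r basis.
    coeffs = X @ Q                   # (n, r) low-rank coefficients
    fidelity = np.linalg.norm(coeffs) ** 2 / (np.linalg.norm(X) ** 2 + 1e-12)
    if fidelity < tau:
        # Fidelity-triggered reset: recompute the basis from a truncated SVD
        # of the current activations.
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        Q = Vt[:r].T
        coeffs = X @ Q
        fidelity = np.linalg.norm(coeffs) ** 2 / (np.linalg.norm(X) ** 2 + 1e-12)
    return Q, coeffs, fidelity
```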

Abstract

Recursive architectures such as Tiny Recursive Models (TRMs) perform implicit reasoning through iterative latent computation, yet the geometric structure of these reasoning trajectories remains poorly understood. We investigate the activation manifold of TRMs during recursive unrolling and find that activations occupy an effectively linear, low-dimensional subspace whose principal directions can be tracked dynamically with cheap power iterations. This suggests that weight-sharing concentrates iterative computation along a small number of dominant eigendirections, and we find that this concentration varies sharply across computational sites. We exploit this structure through LASER (Low-Rank Activation SVD for Efficient Recursion), a dynamic compression framework that maintains an evolving low-rank basis via matrix-free subspace tracking with a fidelity-triggered reset mechanism, achieving ~60% activation memory savings with no statistically significant accuracy degradation. Our analysis raises questions about how recursive architectures allocate representational capacity during implicit reasoning, and whether this concentration can be exploited to improve the efficiency and stability of latent computation.
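
To make the memory figure concrete, here is a hedged back-of-the-envelope calculation of how storing rank-r coefficients plus a shared basis compares to storing full activations. The sizes `n`, `d`, and `r` below are hypothetical and are not taken from the paper; the actual ~60% figure depends on the ranks and shapes LASER maintains in practice.

```python
# Illustration with made-up sizes: low-rank storage vs. full activations.
n, d, r = 4096, 512, 192           # assumed tokens x width, and an assumed rank r
full = n * d                        # floats needed to store full activations
low_rank = n * r + d * r            # coefficients (n, r) + shared basis (d, r)
savings = 1 - low_rank / full
print(f"activation memory savings: {savings:.0%}")  # ~58% for these illustrative sizes
```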