Revisiting Anisotropy in Language Transformers: The Geometry of Learning Dynamics
arXiv cs.CL / 4/13/2026
Key Points
- The paper revisits anisotropy in Transformer-based language models, arguing that it complicates geometric interpretations of learning dynamics.
- It provides geometric explanations for how frequency-biased sampling reduces “curvature visibility” and why training tends to amplify tangent directions.
- The authors introduce an empirical method that uses concept-based mechanistic interpretability during training to fit low-rank tangent proxies derived from activations.
- These activation-derived tangent directions are evaluated against true gradients computed by standard backpropagation; they capture a disproportionately large share of gradient energy, and a larger share of gradient anisotropy, than matched random controls.
- Results are reported across both encoder-style and decoder-style language models, supporting a tangent-aligned account of anisotropy.
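The evaluation described above can be sketched with synthetic data: fit a low-rank basis from activations (here via PCA, an assumed stand-in for the paper's concept-based fitting), project gradients onto it, and compare the captured gradient energy against a matched random subspace of the same rank. All names, shapes, and data below are illustrative, not the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n activation/gradient pairs in a d-dimensional space,
# with k the rank of the low-rank tangent proxy (all values assumed).
d, n, k = 64, 500, 8
# Anisotropic activations: per-dimension scales decaying from 3.0 to 0.1.
acts = rng.normal(size=(n, d)) @ np.diag(np.linspace(3.0, 0.1, d))
# Toy gradients partially aligned with activations, plus small noise.
grads = 0.1 * acts + 0.01 * rng.normal(size=(n, d))

# Fit a rank-k basis from centered activations (top right-singular vectors).
_, _, Vt = np.linalg.svd(acts - acts.mean(axis=0), full_matrices=False)
basis = Vt[:k]  # (k, d), orthonormal rows

# Fraction of gradient energy captured by the activation-derived subspace.
proj = grads @ basis.T @ basis
energy_frac = (proj ** 2).sum() / (grads ** 2).sum()

# Matched control: a random rank-k subspace of the same dimension.
Q, _ = np.linalg.qr(rng.normal(size=(d, k)))
ctrl = grads @ Q @ Q.T
ctrl_frac = (ctrl ** 2).sum() / (grads ** 2).sum()

print(f"activation-derived rank-{k} subspace: {energy_frac:.3f}")
print(f"random rank-{k} control:              {ctrl_frac:.3f}")
```

In this toy setting the activation-derived subspace captures well above the ~k/d energy fraction that a random control achieves, which is the shape of the comparison the paper reports; the real experiments use gradients from training language models, not synthetic draws.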