Beyond Activation Alignment: The Geometry of Neural Sensitivity
arXiv cs.LG / 5/6/2026
Key Points
- The paper shows that commonly used activation-alignment metrics (RSA, CCA, CKA) may miss differences in how networks use local stimulus evidence, because global agreement between linear readouts does not imply similarity in sensitivity to small perturbations.
- It introduces a complementary framework that summarizes neural representations via local decodable information, using Fisher information and local representation geometry to characterize expected discriminability for perturbations within a chosen stimulus-coordinate subspace.
- The approach defines a “second-moment” family of local discrimination tasks and computes an operator that serves as a minimal, complete summary of dataset-level expected discriminability.
- It compares representations using a log-spectral distance over the manifold of symmetric positive definite (SPD) matrices, producing the Spectral Riemannian Alignment Score (S-RAS) and providing a multiplicative certificate for lifted task values.
- Experiments demonstrate that the framework can match corresponding layers across independently trained neural networks, enable transferable class-conditional probing, distinguish the behavior of standard versus robustly trained networks, and detect stimulus-coordinate family effects in mouse visual cortex recordings.
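The local-discriminability summary in the second bullet can be illustrated with a toy sketch: for a representation f(s) corrupted by Gaussian noise with covariance Σ, the Fisher information about the stimulus coordinates is Jᵀ Σ⁻¹ J, where J is the Jacobian of f. The function names and the finite-difference Jacobian below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def fisher_information(f, s, sigma, eps=1e-5):
    """Fisher information J^T Sigma^{-1} J of a representation f(s)
    under Gaussian noise with covariance sigma, using a central
    finite-difference Jacobian. Illustrative sketch only."""
    s = np.asarray(s, dtype=float)
    r0 = np.asarray(f(s), dtype=float)
    J = np.empty((r0.size, s.size))
    for i in range(s.size):
        ds = np.zeros_like(s)
        ds[i] = eps
        J[:, i] = (np.asarray(f(s + ds)) - np.asarray(f(s - ds))) / (2 * eps)
    return J.T @ np.linalg.solve(sigma, J)

# Toy check: for a linear "network" r = W s with identity noise,
# the Fisher information reduces to W^T W.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 2))
F = fisher_information(lambda s: W @ s, np.zeros(2), np.eye(8))
assert np.allclose(F, W.T @ W, atol=1e-6)
```

For a linear map the finite difference is exact, which is why the toy check holds to numerical precision; for a real network, J would come from automatic differentiation instead.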
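The log-spectral comparison in the fourth bullet can be sketched with one standard metric on the SPD manifold, the affine-invariant distance d(A, B) = ‖log(A^{-1/2} B A^{-1/2})‖_F, which equals the ℓ₂ norm of the log generalized eigenvalues of (B, A). This is a plausible instance of a log-spectral distance; the paper's exact metric and the S-RAS normalization may differ.

```python
import numpy as np

def spd_log_spectral_distance(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    sqrt(sum_i log(lambda_i)^2), where lambda_i are the eigenvalues
    of A^{-1/2} B A^{-1/2}. A sketch of one common SPD metric."""
    w, V = np.linalg.eigh(A)                     # A = V diag(w) V^T
    A_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T    # whitening transform
    M = A_inv_sqrt @ B @ A_inv_sqrt              # whitened B (still SPD)
    lam = np.linalg.eigvalsh(M)
    return float(np.sqrt(np.sum(np.log(lam) ** 2)))

# Toy check: against the identity, B = diag(1, 4, 1/4) has
# log-eigenvalues (0, log 4, -log 4), so d = sqrt(2) * log 4.
d = spd_log_spectral_distance(np.eye(3), np.diag([1.0, 4.0, 0.25]))
assert abs(d - np.sqrt(2) * np.log(4)) < 1e-9
```

A distance like this is invariant to joint invertible linear transforms of both matrices, which is one reason SPD geometry is attractive for comparing second-moment summaries across networks.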