Untrained CNNs Match Backpropagation at V1: A Systematic RSA Comparison of Four Learning Rules Against Human fMRI
arXiv cs.LG / 4/21/2026
Key Points
- The study systematically compares four learning rules—backpropagation, feedback alignment, predictive coding, and STDP—using identical CNN architectures and evaluates alignment to human visual cortex via representational similarity analysis (RSA) on THINGS-fMRI data.
- A key result is that early visual cortex alignment (V1/V2) is largely determined by the network architecture rather than the learning rule, with an untrained random-weight CNN performing similarly to backpropagation.
- At higher visual areas (LOC/IT), differences emerge: backpropagation yields the best alignment overall, while predictive coding with local Hebbian updates matches it at IT.
- Feedback alignment underperforms, with V1 alignment falling below even the untrained random-weight baseline, and the findings remain robust after controlling for pixel-level similarity.
- Overall, the authors conclude that the learning rule's effect on cortical alignment is region-specific: architecture drives alignment in early visual areas, whereas supervised objectives matter more for alignment in late (higher) areas.
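The RSA pipeline behind these comparisons can be sketched in a few lines: build a representational dissimilarity matrix (RDM) over stimuli for the model layer and for the brain region, then correlate the two RDMs. The snippet below is a minimal illustration with random data, not the paper's actual code; all sizes and variable names are hypothetical.

```python
# Minimal RSA sketch (assumed pipeline, not the paper's implementation):
# 1) compute an RDM for model activations and for fMRI voxel patterns,
# 2) score alignment as the Spearman correlation of the two RDMs.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli, n_units, n_voxels = 20, 100, 50  # hypothetical sizes

# Hypothetical data: rows are stimuli (e.g. THINGS images)
model_acts = rng.standard_normal((n_stimuli, n_units))   # CNN layer features
brain_acts = rng.standard_normal((n_stimuli, n_voxels))  # fMRI voxel patterns

# RDM = pairwise correlation distance between stimulus response patterns;
# pdist returns the condensed upper triangle, which is all RSA needs.
model_rdm = pdist(model_acts, metric="correlation")
brain_rdm = pdist(brain_acts, metric="correlation")

# RSA alignment score: rank correlation between the two RDMs
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"RSA alignment (Spearman rho): {rho:.3f}")
```

Running this per region (V1/V2 vs. LOC/IT) and per learning rule, with an untrained network as a baseline, reproduces the kind of comparison the study reports.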