Representation Selection via Cross-Model Agreement using Canonical Correlation Analysis
arXiv cs.CV / 4/2/2026
Key Points
- The paper introduces a training-free post-hoc method that applies canonical correlation analysis (CCA) to find linear projections for selecting and reducing redundant visual representation dimensions across two pretrained image encoders.
- By exploiting cross-model agreement, the approach aims to retain shared semantic content while discarding overcomplete or model-specific dimensions more effectively than single-model dimensionality reduction like PCA.
- Experiments across datasets such as ImageNet-1k, CIFAR-100, and MNIST show that representation dimensionality can be reduced by over 75% while improving downstream performance.
- The method can also be applied at fixed dimensionality to transfer or refine representations from larger or fine-tuned models, with reported accuracy improvements of up to 12.6% over PCA and baseline projections.