Inference-Path Optimization via Circuit Duplication in Frozen Visual Transformers for Marine Species Classification
arXiv cs.CV / 4/7/2026
Key Points
- The paper studies label-efficient marine species classification using frozen embeddings from self-supervised vision foundation models (e.g., DINOv3), with no fine-tuning or weight updates.
- It applies “Circuit Duplication,” an inference-time technique from LLMs, duplicating a chosen range of transformer layers during the forward pass to strengthen representations.
- On the class-imbalanced AQUA20 benchmark, both global and class-specific circuit selection outperform the standard single-pass frozen forward, with class-specific selection performing best.
- With the highest label budget, class-specific selection achieves macro F1 = 0.875, nearly closing the gap to the fully supervised ConvNeXt baseline (0.889) without any gradient-based training.
- The results indicate strong class-dependent gains (about 75% of classes benefit from class-specific circuits), and the work claims the first application of Circuit Duplication to computer vision.
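The core mechanic is simple: the model's weights stay frozen, and only the inference path changes, with a selected contiguous range of transformer blocks executed more than once per forward pass. The sketch below illustrates this idea in plain Python; the function and parameter names (`forward_with_duplication`, `dup_start`, `dup_end`, `repeats`) are illustrative assumptions, not the paper's API, and the paper's circuit-selection procedure (global vs. class-specific) is not shown.

```python
from typing import Callable, List, Sequence

def forward_with_duplication(
    blocks: Sequence[Callable[[List[float]], List[float]]],
    x: List[float],
    dup_start: int,
    dup_end: int,
    repeats: int = 2,
) -> List[float]:
    """Run a frozen stack of transformer blocks, executing blocks in
    [dup_start, dup_end) `repeats` times instead of once.
    No weights change; only the inference path is altered."""
    for i, block in enumerate(blocks):
        n = repeats if dup_start <= i < dup_end else 1
        for _ in range(n):
            x = block(x)
    return x

# Toy demonstration: each "block" just records its index so we can
# inspect the execution order of the modified forward pass.
trace: List[int] = []

def make_block(idx: int) -> Callable[[List[float]], List[float]]:
    def block(x: List[float]) -> List[float]:
        trace.append(idx)
        return x
    return block

blocks = [make_block(i) for i in range(4)]
forward_with_duplication(blocks, [0.0], dup_start=1, dup_end=3, repeats=2)
print(trace)  # → [0, 1, 1, 2, 2, 3]: blocks 1 and 2 run twice
```

In a real ViT (e.g., a timm model), `blocks` would be the frozen `model.blocks` list, and class-specific selection would choose a different `(dup_start, dup_end)` range per class based on validation performance.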