Taking Shortcuts for Categorical VQA Using Super Neurons
arXiv cs.CV / 3/12/2026
News · Models & Research
Key Points
- The authors propose Super Neurons (SNs): probing scalar activations from vision-language models to build accurate, training-free classifiers for diverse tasks.
- SNs enable extreme early exiting by locating discriminative neurons in shallow layers, allowing exit at the first generated token and reducing computation.
- Compared with baselines, SNs improve classification performance while achieving up to a 5.10x speedup.
- The approach shifts focus from sparse attention vectors to raw activations, expanding the parameter search space for task-specific classifiers in vision-language models.
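The core idea in the key points above, probing scalar neuron activations to build a training-free classifier, can be illustrated with a minimal sketch. The function `find_super_neuron` below is a hypothetical simplification (not the authors' method): it scans candidate neurons for the one whose scalar activation best separates two classes using a simple midpoint threshold, which is then enough to classify new inputs without any training.

```python
import numpy as np

def find_super_neuron(acts, labels):
    """Hypothetical sketch: pick the single neuron whose scalar
    activation best separates two classes with a midpoint threshold.

    acts: (num_samples, num_neurons) array of probed activations
    labels: (num_samples,) array of 0/1 class labels
    Returns (neuron index, threshold, training-set accuracy)."""
    best = (None, None, -1.0)
    for j in range(acts.shape[1]):
        a = acts[:, j]
        # Threshold halfway between the per-class mean activations.
        thr = (a[labels == 0].mean() + a[labels == 1].mean()) / 2
        pred = (a > thr).astype(int)
        # Allow either sign convention for the discriminative direction.
        acc = max((pred == labels).mean(), ((1 - pred) == labels).mean())
        if acc > best[2]:
            best = (j, thr, acc)
    return best

# Toy data: neuron 2 is made to carry the class signal.
rng = np.random.default_rng(0)
labels = np.array([0] * 50 + [1] * 50)
acts = rng.normal(0.0, 1.0, (100, 5))
acts[:, 2] += labels * 4.0  # inject separability into neuron 2
idx, thr, acc = find_super_neuron(acts, labels)
print(idx, round(acc, 2))
```

In the paper's setting the activations would come from shallow layers of a vision-language model at the first generated token, so once the discriminative neuron is located, deeper layers and further decoding can be skipped entirely, which is where the reported speedup comes from.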