Latent Anomaly Knowledge Excavation: Unveiling Sparse Sensitive Neurons in Vision-Language Models
arXiv cs.CV / 4/10/2026
Key Points
- The paper argues that vision-language models already contain anomaly-detection capability, but it is latent and only sparsely activated within a small set of anomaly-sensitive neurons.
- It introduces a training-free method called Latent Anomaly Knowledge Excavation (LAKE) that uses only a minimal set of normal samples to identify and elicit those critical neuronal signals.
- LAKE produces a compact “normality representation” that links visual structural deviations with cross-modal semantic activations for anomaly detection.
- Experiments on industrial anomaly detection benchmarks reportedly achieve state-of-the-art results while also offering neuron-level interpretability.
- The authors propose a shift in perspective from learning downstream anomaly modules to activating targeted latent knowledge already embedded in pre-trained VLMs.
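The paper's mechanism is only described at a high level, but the core idea of the key points above (estimate per-neuron normality statistics from a minimal set of normal samples, restrict scoring to a sparse subset of anomaly-sensitive neurons) can be sketched. This is a hypothetical illustration, not LAKE's actual algorithm: the feature activations are synthetic stand-ins for VLM neuron outputs, and the selection criterion (picking the neurons most stable on normal data) is an assumption, since the paper's criterion is not given here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for VLM neuron activations: rows = samples, cols = neurons.
# In the real method these would come from a frozen pre-trained vision-language
# model; here we fabricate them so the sketch is self-contained and runnable.
n_neurons = 512
scales = np.ones(n_neurons)
scales[:32] = 0.1  # a small group of neurons is very consistent on normal data
normal_acts = rng.normal(0.0, 1.0, (64, n_neurons)) * scales

# Step 1 (training-free): per-neuron normality statistics from normal samples.
mu = normal_acts.mean(axis=0)
sigma = normal_acts.std(axis=0) + 1e-6

# Step 2: pick a sparse "sensitive" subset. Hypothetical criterion: the neurons
# most stable on normal data, so any deviation on them stands out sharply.
sensitive = np.argsort(sigma)[:32]

def deviation(acts: np.ndarray) -> np.ndarray:
    """Per-neuron absolute z-score against the normality statistics."""
    return np.abs((acts - mu) / sigma)

def anomaly_score(acts: np.ndarray) -> float:
    """Mean deviation over the sparse sensitive subset only."""
    return float(deviation(acts)[sensitive].mean())

# A held-out normal sample vs. one with an injected structural deviation.
test_normal = rng.normal(0.0, 1.0, n_neurons) * scales
anomaly = test_normal.copy()
anomaly[:16] += 2.0  # perturb some of the stable neurons

print(anomaly_score(test_normal))  # low
print(anomaly_score(anomaly))      # much higher
```

The sketch mirrors the paper's framing in one respect: nothing is trained, and the score comes entirely from reading existing activations against a compact normality representation built from a few normal samples.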