Latent Anomaly Knowledge Excavation: Unveiling Sparse Sensitive Neurons in Vision-Language Models

arXiv cs.CV / 4/10/2026

Key Points

  • The paper argues that vision-language models already contain anomaly-detection capability, but it is latent and only sparsely activated within a small set of anomaly-sensitive neurons.
  • It introduces a training-free method called Latent Anomaly Knowledge Excavation (LAKE) that uses only a minimal set of normal samples to identify and elicit those critical neuronal signals.
  • LAKE produces a compact “normality representation” that links visual structural deviations with cross-modal semantic activations for anomaly detection.
  • Experiments on industrial anomaly detection benchmarks reportedly achieve state-of-the-art results while also offering neuron-level interpretability.
  • The authors propose a shift in perspective from learning downstream anomaly modules to activating targeted latent knowledge already embedded in pre-trained VLMs.
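The paper does not spell out its selection criterion here, but the core idea in the points above — finding a sparse set of anomaly-sensitive neurons from only a few normal samples, then scoring test inputs by how far they deviate on exactly those neurons — can be sketched as follows. This is a hypothetical proxy, not LAKE itself: it assumes neurons that activate stably across normal samples are the informative ones, and that deviation on them signals an anomaly. All names (`select_sensitive_neurons`, `anomaly_score`, the stability ratio) are illustrative.

```python
import numpy as np

def select_sensitive_neurons(normal_feats, k=32):
    """Pick the k neurons whose activations are most consistent across
    normal samples (high mean magnitude relative to variance), on the
    assumption that deviations on these stable neurons signal anomalies.

    normal_feats: (n_samples, n_neurons) array of pre-trained VLM activations.
    Returns the selected indices and the per-neuron normal statistics.
    """
    mu = normal_feats.mean(axis=0)
    sigma = normal_feats.std(axis=0)
    stability = np.abs(mu) / (sigma + 1e-8)   # high = stable and strongly active
    idx = np.sort(np.argsort(stability)[-k:])  # indices of the k most stable neurons
    return idx, mu[idx], sigma[idx]

def anomaly_score(test_feat, idx, mu, sigma):
    """Score a test feature by its mean absolute z-deviation
    on the selected sensitive neurons only (training-free)."""
    z = np.abs(test_feat[idx] - mu) / (sigma + 1e-8)
    return float(z.mean())
```

Because both steps are just statistics over frozen activations, nothing is trained — consistent with the training-free framing of the paper.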

Abstract

Large-scale vision-language models (VLMs) exhibit remarkable zero-shot capabilities, yet the internal mechanisms driving their anomaly detection (AD) performance remain poorly understood. Current methods predominantly treat VLMs as black-box feature extractors, assuming that anomaly-specific knowledge must be acquired through external adapters or memory banks. In this paper, we challenge this assumption by arguing that anomaly knowledge is intrinsically embedded within pre-trained models but remains latent and under-activated. We hypothesize that this knowledge is concentrated within a sparse subset of anomaly-sensitive neurons. To validate this, we propose latent anomaly knowledge excavation (LAKE), a training-free framework that identifies and elicits these critical neuronal signals using only a minimal set of normal samples. By isolating these sensitive neurons, LAKE constructs a highly compact normality representation that integrates visual structural deviations with cross-modal semantic activations. Extensive experiments on industrial AD benchmarks demonstrate that LAKE achieves state-of-the-art performance while providing intrinsic, neuron-level interpretability. Ultimately, our work advocates for a paradigm shift: redefining anomaly detection as the targeted activation of latent pre-trained knowledge rather than the acquisition of a downstream task.
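The abstract's "normality representation that integrates visual structural deviations with cross-modal semantic activations" suggests a two-term score: distance of the image feature from a normality prototype, fused with a CLIP-style text-similarity signal. The sketch below is a minimal, hypothetical reading of that fusion, not the paper's actual formulation; `alpha`, the prompt embeddings, and the temperature are assumed hyperparameters.

```python
import numpy as np

def _cos(a, b):
    """Cosine similarity between two 1-D feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def fused_anomaly_score(img_feat, normal_proto, txt_normal, txt_anom,
                        alpha=0.5, temp=100.0):
    """Hypothetical fusion of the two signals the abstract names:
    - visual structural deviation: 1 - cosine similarity to a normality
      prototype built from the few available normal samples;
    - cross-modal semantic activation: temperature-scaled softmax weight
      on an 'anomalous' vs. 'normal' text embedding (CLIP-style).
    """
    visual_dev = 1.0 - _cos(img_feat, normal_proto)
    sims = np.array([_cos(img_feat, txt_normal), _cos(img_feat, txt_anom)])
    probs = np.exp(sims * temp) / np.exp(sims * temp).sum()
    semantic = probs[1]  # probability mass on the "anomalous" prompt
    return alpha * visual_dev + (1 - alpha) * semantic
```

A normal image close to the prototype and to the "normal" prompt scores near 0; an image that drifts structurally and aligns with the "anomalous" prompt scores near 1.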