Mechanistic Anomaly Detection via Functional Attribution

arXiv cs.LG / April 22, 2026


Key Points

  • The paper reframes mechanistic anomaly detection (MAD) as a functional attribution problem that checks how well outputs can be explained by samples from a trusted reference set, where attribution failure indicates anomalous internal behavior.
  • It implements this idea using influence functions to measure functional coupling between test samples and a small reference set through parameter-space sampling.
  • Experiments across multiple anomaly types and modalities show strong results for vision backdoors, achieving state-of-the-art performance on BackdoorBench with an average Defense Effectiveness Rating (DER) of 0.93.
  • For LLMs, the method improves detection over baselines across several backdoor types, including models that are explicitly obfuscated, and it also detects adversarial and out-of-distribution inputs while distinguishing different anomalous mechanisms within one model.
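The core idea above — scoring a test input by how strongly its gradients couple to a small trusted reference set, averaged over draws in parameter space — can be sketched on a toy model. Everything below is illustrative, not the paper's implementation: a 2-D logistic model stands in for the network, and a cosine-similarity gradient proxy stands in for a full influence-function (inverse-Hessian) computation; all names (`coupling_score`, `w_star`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, x, y):
    # Per-sample gradient of the logistic loss w.r.t. weights w.
    return (sigmoid(w @ x) - y) * x

# Trusted reference set: class 0 around (-2, 0), class 1 around (+2, 0).
ref_x = np.vstack([rng.normal([-2.0, 0.0], 0.3, (20, 2)),
                   rng.normal([+2.0, 0.0], 0.3, (20, 2))])
ref_y = np.array([0] * 20 + [1] * 20)

# A weight vector separating the blobs (stands in for a trained model).
w_star = np.array([1.5, 0.0])

def coupling_score(x, y, n_draws=50, noise=0.05):
    """Mean cosine similarity between the test-sample gradient and the
    reference-set gradients, averaged over random parameter-space draws
    around w_star. Low coupling means the trusted set fails to 'explain'
    the output, i.e. the sample is flagged as mechanistically anomalous."""
    scores = []
    for _ in range(n_draws):
        w = w_star + rng.normal(0.0, noise, w_star.shape)  # parameter-space sample
        g_test = grad(w, x, y)
        for rx, ry in zip(ref_x, ref_y):
            g_ref = grad(w, rx, ry)
            denom = np.linalg.norm(g_test) * np.linalg.norm(g_ref) + 1e-12
            scores.append((g_test @ g_ref) / denom)
    return float(np.mean(scores))

in_dist = coupling_score(np.array([2.1, 0.1]), 1)  # near the class-1 blob
ood     = coupling_score(np.array([0.0, 6.0]), 1)  # far off-manifold
```

An in-distribution input yields gradients aligned with those of the reference set (coupling near 1), while the off-manifold input's gradient is nearly orthogonal to every reference gradient (coupling near 0), so thresholding the score separates the two. The real method replaces this cosine proxy with influence functions and applies it to deep networks across modalities.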

Abstract

We can often verify the correctness of neural network outputs using ground truth labels, but we cannot reliably determine whether the output was produced by normal or anomalous internal mechanisms. Mechanistic anomaly detection (MAD) aims to flag these cases, but existing methods either depend on latent space analysis, which is vulnerable to obfuscation, or are specific to particular architectures and modalities. We reframe MAD as a functional attribution problem: asking to what extent samples from a trusted set can explain the model's output, where attribution failure signals anomalous behavior. We operationalize this using influence functions, measuring functional coupling between test samples and a small reference set via parameter-space sampling. We evaluate across multiple anomaly types and modalities. For backdoors in vision models, our method achieves state-of-the-art detection on BackdoorBench, with an average Defense Effectiveness Rating (DER) of 0.93 across seven attacks and four datasets (next best 0.83). For LLMs, we similarly achieve a significant improvement over baselines for several backdoor types, including on explicitly obfuscated models. Beyond backdoors, our method can detect adversarial and out-of-distribution samples, and distinguishes multiple anomalous mechanisms within a single model. Our results establish functional attribution as an effective, modality-agnostic tool for detecting anomalous behavior in deployed models.