Reasoning-Driven Anomaly Detection and Localization with Image-Level Supervision

arXiv cs.CV / 3/31/2026


Key Points

  • The proposed ReAL framework extracts anomaly-related tokens from the MLLM's reasoning process and aggregates their attention responses to produce pixel-level anomaly maps.
  • Consistency-Guided Reasoning Optimization (CGRO) uses reinforcement learning to align reasoning tokens with visual attention, aiming for more coherent reasoning and improved localization accuracy.
  • The method claims to perform anomaly detection, localization, and interpretable reasoning from image-level supervision alone, without pixel-level labels or external auxiliary vision modules.
  • On four public benchmarks, the authors report substantial improvements in detection, localization, and interpretability, with performance competitive with MLLM-based methods trained under dense pixel-level supervision.
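The CGRO idea of rewarding agreement between reasoning tokens and visual attention can be illustrated with a minimal sketch. The reward below is an assumption for illustration only (the paper's actual RL objective is not specified here): it scores how consistently a set of anomaly-related reasoning tokens attend to the same image regions, via cosine similarity of each token's attention map to their aggregate.

```python
import numpy as np

def consistency_reward(attn):
    """Hypothetical consistency reward (illustrative, not the authors' formula).

    attn: array of shape (k, P) -- attention maps of k anomaly-related
          reasoning tokens over P image patches.
    Returns the mean cosine similarity of each token's attention map to the
    aggregate map, so tokens attending to the same regions score near 1.
    """
    mean_map = attn.mean(axis=0)
    num = attn @ mean_map                                    # per-token dot products
    den = np.linalg.norm(attn, axis=1) * np.linalg.norm(mean_map) + 1e-8
    return float((num / den).mean())
```

A reward of this shape could serve as a scalar signal in a policy-gradient update over the generated reasoning tokens; perfectly aligned attention maps yield a reward of 1, while scattered, incoherent attention lowers it.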

Abstract

Multimodal large language models (MLLMs) have recently demonstrated remarkable reasoning and perceptual abilities for anomaly detection. However, most approaches remain confined to image-level anomaly detection and textual reasoning, while pixel-level localization still relies on external vision modules and dense annotations. In this work, we activate the intrinsic reasoning potential of MLLMs to perform anomaly detection, pixel-level localization, and interpretable reasoning solely from image-level supervision, without any auxiliary components or pixel-wise labels. Specifically, we propose Reasoning-Driven Anomaly Localization (ReAL), which extracts anomaly-related tokens from the autoregressive reasoning process and aggregates their attention responses to produce pixel-level anomaly maps. We further introduce a Consistency-Guided Reasoning Optimization (CGRO) module that leverages reinforcement learning to align reasoning tokens with visual attentions, resulting in more coherent reasoning and accurate anomaly localization. Extensive experiments on four public benchmarks demonstrate that our method significantly improves anomaly detection, localization, and interpretability. Remarkably, despite relying solely on image-level supervision, our approach achieves performance competitive with MLLM-based methods trained under dense pixel-level supervision. Code is available at https://github.com/YizhouJin313/ReADL.
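The core mechanism described in the abstract, aggregating the attention responses of anomaly-related reasoning tokens into a pixel-level map, can be sketched as follows. This is a simplified illustration under stated assumptions, not the authors' implementation: token selection by keyword matching, the keyword list, the patch grid, and the nearest-neighbour upsampling are all assumptions for clarity.

```python
import numpy as np

# Illustrative keyword list (an assumption; ReAL's actual token selection
# is tied to its autoregressive reasoning process).
ANOMALY_KEYWORDS = {"scratch", "crack", "dent", "stain", "defect", "anomaly"}

def anomaly_map(tokens, attentions, patch_grid=(16, 16), image_size=(224, 224)):
    """Aggregate attention of anomaly-related tokens into a pixel-level map.

    tokens: list[str] of generated reasoning tokens.
    attentions: array of shape (num_tokens, H_p * W_p) -- each token's
                attention over image patches.
    Returns an (H, W) anomaly heat map normalized to [0, 1].
    """
    idx = [i for i, t in enumerate(tokens)
           if t.lower().strip(".,") in ANOMALY_KEYWORDS]
    if not idx:  # no anomaly-related tokens -> flat (normal) map
        return np.zeros(image_size)
    agg = attentions[idx].mean(axis=0).reshape(patch_grid)       # aggregate responses
    agg = (agg - agg.min()) / (agg.max() - agg.min() + 1e-8)     # min-max normalize
    # nearest-neighbour upsample from the patch grid to pixel resolution
    rep_h = image_size[0] // patch_grid[0]
    rep_w = image_size[1] // patch_grid[1]
    return np.repeat(np.repeat(agg, rep_h, axis=0), rep_w, axis=1)
```

Because the map is derived entirely from the model's own attention during reasoning, no auxiliary segmentation head or pixel-wise labels are required, which matches the paper's image-level-supervision-only claim.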