Text-Guided Multimodal Unified Industrial Anomaly Detection

arXiv cs.CV / 4/28/2026


Key Points

  • The paper introduces a text-semantics-guided multimodal framework for industrial anomaly detection on RGB-3D data, targeting two weaknesses of existing unsupervised methods: ambiguous cross-modal alignment and insufficient geometric modeling in RGB-to-3D feature mapping.
  • It proposes a Geometry-Aware Cross-Modal Mapper to preserve geometric structure when mapping RGB features into the 3D feature space, and an Object-Conditioned Textual Feature Adaptor to inject object-level semantic priors (a minimal sketch of both modules follows this list).
  • The work also presents a unified learning paradigm that removes the usual one-model-one-class constraint, allowing a single model to detect anomalies across diverse classes.
  • Experiments on the MVTec 3D-AD and Eyecandies datasets show state-of-the-art performance for both anomaly classification and localization in unsupervised settings.
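
The paper's code and exact architecture are not given here, so the following is only a minimal PyTorch sketch of what the two named modules could look like. The module names come from the paper; every layer, dimension (rgb_dim, pcd_dim, text_dim), and the gating design are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- module names follow the paper, but all
# layers, dimensions, and design choices below are assumptions.
import torch
import torch.nn as nn

class GeometryAwareMapper(nn.Module):
    """Maps RGB patch features into the 3D feature space while
    conditioning on local point-cloud geometry."""
    def __init__(self, rgb_dim=768, pcd_dim=1152, hidden=1024):
        super().__init__()
        self.rgb_proj = nn.Linear(rgb_dim, hidden)
        self.geo_proj = nn.Linear(pcd_dim, hidden)   # geometric context
        self.out = nn.Linear(hidden, pcd_dim)

    def forward(self, rgb_feats, geo_feats):
        # Fuse appearance with geometry before projecting into the 3D
        # feature space, so the mapping is not blind to surface structure.
        fused = torch.relu(self.rgb_proj(rgb_feats) + self.geo_proj(geo_feats))
        return self.out(fused)

class ObjectConditionedTextAdaptor(nn.Module):
    """Injects an object-level text embedding (e.g., a CLIP prompt for
    the inspected class) as a semantic prior into the visual features."""
    def __init__(self, feat_dim=1152, text_dim=512):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, feat_dim)
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.Sigmoid())

    def forward(self, feats, text_emb):
        # Broadcast the per-object text prior over all patch positions
        # and blend it in through a learned gate.
        prior = self.text_proj(text_emb).unsqueeze(1).expand_as(feats)
        g = self.gate(torch.cat([feats, prior], dim=-1))
        return feats + g * prior
```

With dummy tensors, `adaptor(mapper(rgb, geo), text)` takes RGB patch features of shape (B, P, 768), point-cloud features of shape (B, P, 1152), and a (B, 512) prompt embedding, and returns text-conditioned features in the 3D feature space.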

Abstract

Industrial anomaly detection based on RGB-3D multimodal data has emerged as a mainstream paradigm for intelligent quality inspection. However, existing unsupervised methods suffer from two critical limitations: ambiguous cross-modal alignment caused by the lack of high-level semantic guidance and insufficient geometric modeling for RGB-to-3D feature mapping. To address these issues, we propose a unified multimodal industrial anomaly detection framework guided by text semantics. The framework consists of two core modules: a Geometry-Aware Cross-Modal Mapper to preserve geometric structure during modality conversion, and an Object-Conditioned Textual Feature Adaptor to align multimodal features with semantic priors. Furthermore, we establish a unified learning paradigm for multimodal industrial anomaly detection, which breaks the one-model-one-class constraint and enables accurate anomaly detection across diverse classes using a single model. Extensive experiments on the MVTec 3D-AD and Eyecandies datasets demonstrate that our method achieves state-of-the-art performance in classification and localization under unsupervised settings.
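
One reading of the unified paradigm: instead of fitting one detector per class, a single model is trained on batches drawn from every class, with class identity entering only through the text prior. The toy loop below sketches that idea, reusing the two modules sketched above; the class names are real MVTec 3D-AD categories, but the random tensors, the stand-in objective, and all hyperparameters are placeholders, not the paper's training recipe.

```python
# Toy sketch of one-model-for-all-classes training (assumptions throughout).
import torch

classes = ["bagel", "cable_gland", "carrot"]           # MVTec 3D-AD classes
mapper = GeometryAwareMapper()                         # shared across classes
adaptor = ObjectConditionedTextAdaptor()
params = list(mapper.parameters()) + list(adaptor.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

# Placeholder prompt embeddings; a real system would encode per-class
# text prompts with a frozen text encoder such as CLIP's.
text_emb = {c: torch.randn(512) for c in classes}

for step in range(3):                                  # toy training steps
    for c in classes:                                  # batches mix all classes
        rgb = torch.randn(2, 784, 768)                 # dummy RGB patch feats
        geo = torch.randn(2, 784, 1152)                # dummy point-cloud feats
        feats = adaptor(mapper(rgb, geo), text_emb[c].expand(2, -1))
        loss = feats.pow(2).mean()                     # stand-in objective
        opt.zero_grad(); loss.backward(); opt.step()
```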