VADMamba++: Efficient Video Anomaly Detection via Hybrid Modeling in Grayscale Space

arXiv cs.CV / 4/2/2026


Key Points

  • The paper introduces VADMamba++, a new video anomaly detection approach that removes reliance on optical flow and targets a single proxy task using only frame-level inputs.
  • It applies a Gray-to-RGB paradigm by learning a single-channel-to-three-channel reconstruction mapping, enabling anomalies to be detected via inconsistencies between structural geometry and inferred chromatic cues.
  • The method uses a hybrid backbone combining Mamba, CNN, and Transformer modules to model diverse normal patterns while suppressing anomaly appearances.
  • It improves accuracy with an intra-task fusion scoring strategy that blends explicit future-frame prediction errors and implicit quantized feature errors.
  • Experiments on three benchmark datasets show VADMamba++ outperforms prior state-of-the-art methods while maintaining strong efficiency, particularly under strict single-task settings.
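The Gray-to-RGB idea described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the `gray_to_rgb_model` stand-in simply replicates the gray channel (a real system would use the learned hybrid backbone), but it shows how a colorization-style mapping exposes chromatic inconsistency as a per-pixel anomaly cue.

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Standard ITU-R BT.601 luma conversion, HxWx3 -> HxW."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def gray_to_rgb_model(gray: np.ndarray) -> np.ndarray:
    """Stand-in for the learned colorization network: it merely
    replicates the gray channel, so it cannot recover chroma."""
    return np.repeat(gray[..., None], 3, axis=-1)

def anomaly_map(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel squared error between the true RGB frame and its
    reconstruction from grayscale; large where the inferred color
    disagrees with the observed one."""
    recon = gray_to_rgb_model(to_gray(rgb))
    return ((rgb - recon) ** 2).mean(axis=-1)

# Toy frame: a saturated red patch produces a large chroma error,
# while an achromatic gray patch reconstructs perfectly.
frame = np.zeros((4, 4, 3))
frame[:2, :2] = [1.0, 0.0, 0.0]   # colorful region
frame[2:, 2:] = [0.5, 0.5, 0.5]   # achromatic region
amap = anomaly_map(frame)
```

In this toy setup the red region scores high because its luma-only reconstruction loses all chroma, while the gray region scores zero, mirroring how the paradigm is meant to surface appearance anomalies.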

Abstract

VADMamba pioneered the introduction of Mamba to Video Anomaly Detection (VAD), achieving high accuracy and fast inference through hybrid proxy tasks. Nevertheless, its heavy reliance on optical flow as auxiliary input and on inter-task fusion scoring limits its applicability in single-proxy-task settings. In this paper, we introduce VADMamba++, an efficient VAD method based on the Gray-to-RGB paradigm, which enforces a single-channel-to-three-channel reconstruction mapping, is designed for a single proxy task, and operates without auxiliary inputs. This paradigm compels the model to infer color appearance from grayscale structure, so anomalies are revealed more effectively through dual inconsistencies between structural and chromatic cues. Specifically, VADMamba++ reconstructs grayscale frames into RGB space to discriminate structural geometry and chromatic fidelity simultaneously, enhancing sensitivity to explicit visual anomalies. We further design a hybrid modeling backbone that integrates Mamba, CNN, and Transformer modules to capture diverse normal patterns while suppressing anomalous appearances. Finally, an intra-task fusion scoring strategy integrates explicit future-frame prediction errors with implicit quantized feature errors, further improving accuracy under a single-task setting. Extensive experiments on three benchmark datasets demonstrate that VADMamba++ outperforms state-of-the-art methods while remaining efficient, especially under a strict single-task setting with only frame-level inputs.
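The intra-task fusion scoring described in the abstract can be illustrated with a minimal sketch. The equal weighting and min-max normalization here are assumptions for illustration, not the paper's exact formulation: per-frame explicit prediction errors and implicit quantized-feature errors are each normalized over the clip and then blended into one anomaly score.

```python
import numpy as np

def min_max_norm(x: np.ndarray) -> np.ndarray:
    """Scale a vector of per-frame errors into [0, 1] over the clip."""
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo + 1e-8)

def fused_score(pred_err: np.ndarray,
                feat_err: np.ndarray,
                alpha: float = 0.5) -> np.ndarray:
    """Weighted sum of the two normalized error streams from a single
    proxy task; higher values indicate more anomalous frames.
    alpha = 0.5 is an illustrative choice, not the paper's setting."""
    return alpha * min_max_norm(pred_err) + (1 - alpha) * min_max_norm(feat_err)

# Toy per-frame errors for a 5-frame clip: frame index 3 spikes in both cues.
pred = np.array([0.10, 0.12, 0.11, 0.45, 0.13])
feat = np.array([0.20, 0.22, 0.21, 0.60, 0.19])
scores = fused_score(pred, feat)
```

Normalizing each error stream before blending keeps one cue from dominating purely because of its scale, which is the usual motivation for this kind of score fusion.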
