VADMamba++: Efficient Video Anomaly Detection via Hybrid Modeling in Grayscale Space
arXiv cs.CV / 4/2/2026
Key Points
- The paper introduces VADMamba++, a new video anomaly detection approach that removes reliance on optical flow and targets a single proxy task using only frame-level inputs.
- It applies a Gray-to-RGB paradigm, learning a single-channel-to-three-channel reconstruction mapping so that anomalies surface as inconsistencies between structural geometry and the inferred chromatic cues.
- The method uses a hybrid backbone combining Mamba, CNN, and Transformer modules to model diverse normal patterns while suppressing anomaly appearances.
- It improves accuracy with an intra-task fusion scoring strategy that blends explicit future-frame prediction errors and implicit quantized feature errors.
- Experiments on three benchmark datasets show VADMamba++ outperforms prior state-of-the-art methods while maintaining strong efficiency, particularly under strict single-task settings.
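The fusion-scoring idea in the key points can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the blend weight `alpha`, the min-max normalization, and per-frame MSE as the explicit error are all assumptions, and `predicted_rgb` stands in for frames reconstructed by the learned Gray-to-RGB mapping.

```python
import numpy as np

def minmax_normalize(errors):
    """Min-max normalize per-frame errors to [0, 1] (a common VAD scoring step)."""
    lo, hi = errors.min(), errors.max()
    return (errors - lo) / (hi - lo + 1e-8)

def anomaly_scores(frames_rgb, predicted_rgb, vq_errors, alpha=0.5):
    """Blend an explicit prediction error with an implicit quantized-feature error.

    frames_rgb:    (T, H, W, 3) ground-truth frames
    predicted_rgb: (T, H, W, 3) frames reconstructed from grayscale inputs
    vq_errors:     (T,) per-frame quantized-feature errors
    alpha:         blend weight (an assumption; the paper's fusion may differ)
    """
    # Explicit error: per-frame MSE between the Gray-to-RGB prediction and the frame.
    pred_err = ((frames_rgb - predicted_rgb) ** 2).mean(axis=(1, 2, 3))
    # Fuse the two normalized error streams into a single anomaly score per frame.
    return alpha * minmax_normalize(pred_err) + (1 - alpha) * minmax_normalize(vq_errors)
```

Frames where the inferred color map disagrees with the true frame, or whose features quantize poorly against the learned codebook of normal patterns, receive high scores under either stream, so the fused score flags them regardless of which cue dominates.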