QVAD: A Question-Centric Agentic Framework for Efficient and Training-Free Video Anomaly Detection

arXiv cs.CV / 4/6/2026


Key Points

  • The paper introduces QVAD, a question-centric agentic framework for training-free video anomaly detection that replaces static prompts with an iterative dialogue between an LLM and a VLM.
  • QVAD uses "prompt-updating" based on visual context, so that smaller VLMs can generate high-fidelity captions and perform more precise semantic reasoning without any update to model parameters.
  • The approach is reported to reach state-of-the-art performance on multiple benchmarks (UCF-Crime, XD-Violence, and UBNormal) while using a fraction of the parameters compared with competing methods.
  • QVAD is also claimed to generalize well to the single-scene ComplexVAD dataset, suggesting robustness beyond the multi-scene benchmarks above.
  • The framework is presented as fast at inference with low memory usage, targeting deployment on resource-constrained edge devices.
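The iterative LLM-VLM dialogue described above can be sketched in a few lines. Everything here is illustrative: the function names (`vlm_answer`, `llm_refine`, `llm_score`), the fixed round count, and the scoring rule are assumptions for exposition, not the authors' actual interface.

```python
def vlm_answer(frame, question):
    """Stand-in for a small VLM: returns a caption answering `question`.
    (Hypothetical stub; a real system would call a vision-language model.)"""
    return f"caption for {frame!r} given {question!r}"

def llm_refine(history):
    """Stand-in for the LLM agent: produces a refined follow-up query
    from the dialogue so far (the 'prompt-updating' step)."""
    return f"follow-up question #{len(history) + 1}"

def llm_score(history):
    """Stand-in for the LLM's final anomaly judgment in [0, 1]."""
    return 0.0 if len(history) < 3 else 0.5

def qvad_score(frame, max_rounds=3):
    """Question-centric loop: the LLM updates the prompt, the VLM
    re-describes the frame, and no model parameters are changed."""
    history = []
    question = "Describe the scene."          # static seed prompt
    for _ in range(max_rounds):
        answer = vlm_answer(frame, question)  # VLM grounds the query visually
        history.append((question, answer))
        question = llm_refine(history)        # dynamic prompt refinement
    return llm_score(history)
```

The key design point the paper emphasizes is that the refinement happens purely at the prompt level, which is why lightweight VLMs suffice and inference stays cheap.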

Abstract

Video Anomaly Detection (VAD) is a fundamental challenge in computer vision, particularly due to the open-set nature of anomalies. While recent training-free approaches utilizing Vision-Language Models (VLMs) have shown promise, they typically rely on massive, resource-intensive foundation models to compensate for the ambiguity of static prompts. We argue that the bottleneck in VAD is not necessarily model capacity, but rather the static nature of inquiry. We propose QVAD, a question-centric agentic framework that treats VLM-LLM interaction as a dynamic dialogue. By iteratively refining queries based on visual context, our LLM agent guides smaller VLMs to produce high-fidelity captions and precise semantic reasoning without parameter updates. This "prompt-updating" mechanism effectively unlocks the latent capabilities of lightweight models, enabling state-of-the-art performance on UCF-Crime, XD-Violence, and UBNormal using a fraction of the parameters required by competing methods. We further demonstrate exceptional generalizability on the single-scene ComplexVAD dataset. Crucially, QVAD achieves high inference speeds with minimal memory footprints, making advanced VAD capabilities deployable on resource-constrained edge devices.