Benchmarking Compact VLMs for Clip-Level Surveillance Anomaly Detection Under Weak Supervision
arXiv cs.CV · March 17, 2026
Key Points
- The paper benchmarks compact vision-language models (VLMs) for clip-level surveillance anomaly detection under weak supervision, standardizing preprocessing, prompting, dataset splits, metrics, and runtime settings.
- It compares compact VLMs adapted with parameter-efficient fine-tuning against training-free VLM pipelines and weakly supervised baselines, reporting accuracy, precision, recall, F1, ROC-AUC, and per-clip latency.
- Results show compact VLMs can match or exceed established approaches while maintaining competitive per-clip latency and reduced prompt sensitivity under the unified protocol.
- The study demonstrates that parameter-efficient fine-tuning enables compact VLMs to serve as dependable clip-level anomaly detectors with a favorable accuracy-efficiency trade-off in a transparent experimental setup.
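The evaluation described above scores each clip, thresholds the score into a binary prediction, and reports the standard detection metrics. A minimal sketch of that clip-level metric computation, in pure Python, is shown below; the function and variable names (`clip_metrics`, `scores`, `labels`, `threshold`) are illustrative assumptions, not the paper's actual code, and ROC-AUC is computed via the rank-based (Mann-Whitney U) formulation rather than an explicit ROC curve.

```python
# Hypothetical sketch of clip-level anomaly-detection evaluation.
# Inputs: per-clip anomaly scores from a model, binary ground-truth labels.

def clip_metrics(scores, labels, threshold=0.5):
    """Compute accuracy, precision, recall, F1 and ROC-AUC for
    per-clip anomaly scores against binary labels (1 = anomalous)."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 1)
    fp = sum(1 for p, y in zip(preds, labels) if p == 1 and y == 0)
    fn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 1)
    tn = sum(1 for p, y in zip(preds, labels) if p == 0 and y == 0)
    acc = (tp + tn) / len(labels)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    # ROC-AUC as the probability that a random anomalous clip
    # scores above a random normal clip (ties count as 0.5).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg)) if pos and neg else float("nan")
    return {"accuracy": acc, "precision": prec, "recall": rec,
            "f1": f1, "roc_auc": auc}

# Toy example: 4 clips with scores from a hypothetical compact VLM.
print(clip_metrics([0.8, 0.6, 0.4, 0.2], [1, 0, 1, 0]))
```

Per-clip latency, the remaining metric, would simply be wall-clock time per `scores` entry and is omitted here since it depends on the deployment hardware.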