Benchmarking Compact VLMs for Clip-Level Surveillance Anomaly Detection Under Weak Supervision
arXiv cs.CV / 3/17/2026
Key Points
- The paper benchmarks compact vision-language models (VLMs) for clip-level surveillance anomaly detection under weak supervision, standardizing preprocessing, prompting, dataset splits, metrics, and runtime settings.
- It compares compact VLMs adapted with parameter-efficient fine-tuning against training-free VLM pipelines and weakly supervised baselines, reporting accuracy, precision, recall, F1, ROC-AUC, and per-clip latency.
- Results show compact VLMs can match or exceed established approaches while maintaining competitive per-clip latency and reduced prompt sensitivity under the unified protocol.
- The study demonstrates that parameter-efficient fine-tuning enables compact VLMs to serve as dependable clip-level anomaly detectors with a favorable accuracy-efficiency trade-off in a transparent experimental setup.
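The clip-level metrics named above can be sketched as follows. This is an illustrative example only, not code from the paper: the labels, scores, and threshold are hypothetical, and the ROC-AUC is computed via the rank-statistic (Mann-Whitney) formulation rather than any particular library.

```python
def clip_metrics(labels, scores, threshold=0.5):
    """Compute clip-level accuracy, precision, recall, F1, and ROC-AUC
    from binary ground-truth labels and per-clip anomaly scores.
    Hypothetical sketch; names and data are illustrative."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)

    accuracy = (tp + tn) / len(labels)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)

    # ROC-AUC as a rank statistic: the probability that a randomly
    # chosen anomalous clip scores higher than a normal one
    # (ties count as half a win).
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg)) if pos and neg else float("nan")

    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1, "roc_auc": auc}
```

Note that accuracy, precision, recall, and F1 depend on the chosen threshold, while ROC-AUC is threshold-free — one reason benchmarks like this one report both kinds of metric under a standardized protocol.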