Continual Visual Anomaly Detection on the Edge: Benchmark and Efficient Solutions
arXiv cs.CV / 4/9/2026
Key Points
- The paper introduces a benchmark for visual anomaly detection (VAD) that jointly evaluates edge-deployment constraints and continual learning requirements, focusing on the trade-offs among memory footprint, inference cost, and detection performance.
- The benchmark, the first to cover both settings comprehensively, evaluates seven VAD models across three lightweight backbone architectures and shows that solutions tuned for one constraint can fail when both are combined.
- It proposes Tiny-Dinomaly, a lightweight adaptation of the DINO-based Dinomaly model, which cuts memory by 13x and compute by 20x while improving Pixel F1 by 5 percentage points.
- It also introduces targeted, efficiency-focused modifications to PatchCore and PaDiM that make them better suited to the continual learning setting.
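For readers unfamiliar with the class of methods discussed, PatchCore-style VAD scores a test image by comparing its patch features against a memory bank of features extracted from normal training images; the memory bank's size is exactly the footprint that edge and continual-learning constraints pressure. The sketch below is our own minimal illustration of that nearest-neighbor scoring idea, not code from the paper, and all names and values in it are hypothetical.

```python
import numpy as np

def anomaly_scores(test_feats, memory_bank):
    """Score each test patch feature by its Euclidean distance to the
    nearest memory-bank entry (higher score = more anomalous)."""
    # Pairwise squared distances via broadcasting: shape (n_test, n_bank)
    d2 = ((test_feats[:, None, :] - memory_bank[None, :, :]) ** 2).sum(axis=-1)
    # Distance to the closest "normal" patch in the bank
    return np.sqrt(d2.min(axis=1))

# Toy example: a small bank of "normal" patch features, then two test
# patches -- one drawn from the bank, one far outside its distribution.
rng = np.random.default_rng(0)
bank = rng.normal(0.0, 0.1, size=(100, 8))   # 100 normal 8-dim patch features
test = np.stack([bank[0], np.full(8, 5.0)])  # in-distribution vs. outlier
scores = anomaly_scores(test, bank)           # scores[1] >> scores[0]
```

Shrinking or subsampling `bank` is the usual lever for the memory/accuracy trade-off the benchmark measures, which is why efficiency-focused variants of PatchCore target the memory bank directly.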