I have been building a bi-weekly digest that takes AI security papers from arXiv and translates them into practitioner-oriented intelligence. Each paper is rated on four dimensions -- Threat Realism, Defensive Urgency, Novelty, and Research Maturity (1-5 scale) -- then classified as Act Now / Watch / Horizon based on how quickly defenders need to respond.

The first issue covers three papers:

**Cascade (arXiv:2603.12023) -- Act Now**

Demonstrates compound attacks that chain software CVEs with hardware-level exploits (Rowhammer) against compound AI systems. The key insight is that multi-component AI architectures inherit the full CVE surface of every component, and attackers can compose gadgets across the software-hardware boundary. Rated 5/5 on Novelty -- this cross-stack attack composition hasn't been explored systematically before.

**OpenClaw (arXiv:2603.12644) -- Act Now**

Identifies four vulnerability classes in autonomous agent frameworks through a case study of OpenClaw, focusing on execution-layer security gaps that prompt-level filters miss entirely. If you are building or deploying agentic systems, this is directly relevant -- the attack surface is in the tool-use layer, not the prompt layer.

**LAMLAD (arXiv:2512.21404) -- Watch**

Uses dual-LLM agents to automate feature-level adversarial attacks against Android malware classifiers, achieving a 97% evasion rate. The concern is the automation angle: it substantially lowers the skill barrier for adversarial ML attacks.

Every claim links back to the source arXiv paper, and we use a [VERIFY] tag system for anything that could not be directly confirmed against the source material.

First issue: https://raxe.ai/labs/radar/radar-2026-001

Full archive with structured metadata: https://raxe.ai/labs/radar

No paywall, no signup.
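For anyone who wants to consume the archive programmatically, here is a minimal sketch of what one entry in the structured metadata might look like. The field names, triage strings, and the placeholder rating values below are my own illustration, not the digest's actual schema; only the Novelty 5/5 and "Act Now" label for Cascade come from the post itself.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a single digest entry. Field names and the
# placeholder values are illustrative, not the real RADAR schema.
@dataclass
class RadarEntry:
    arxiv_id: str
    title: str
    threat_realism: int        # 1-5
    defensive_urgency: int     # 1-5
    novelty: int               # 1-5
    research_maturity: int     # 1-5
    triage: str                # "Act Now" / "Watch" / "Horizon"
    verify_tags: list[str] = field(default_factory=list)  # unconfirmed claims

cascade = RadarEntry(
    arxiv_id="2603.12023",
    title="Cascade",
    threat_realism=4,          # placeholder value
    defensive_urgency=4,       # placeholder value
    novelty=5,                 # stated in the post: 5/5 on Novelty
    research_maturity=3,       # placeholder value
    triage="Act Now",          # stated in the post
)

# Example downstream use: surface only entries defenders must act on.
act_now = [e for e in [cascade] if e.triage == "Act Now"]
```

A typed record like this makes the triage buckets filterable and keeps the [VERIFY]-tagged claims attached to the entry they qualify.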




