AI Navigate

[R] Weekly digest: arXiv AI security papers translated for practitioners -- Cascade (cross-stack CVE+Rowhammer attacks on compound AI), LAMLAD (dual-LLM adversarial ML, 97% evasion), OpenClaw (4 vuln classes in agent frameworks)

Reddit r/MachineLearning / 3/20/2026


Key Points

  • The digest translates arXiv AI security papers for practitioners, scoring each paper on Threat Realism, Defensive Urgency, Novelty, and Research Maturity, and classifying them as Act Now, Watch, or Horizon.
  • Cascade (arXiv:2603.12023) demonstrates a cross-stack attack by chaining software CVEs with Rowhammer hardware exploits against compound AI systems, highlighting the full CVE surface across multi-component architectures.
  • OpenClaw (arXiv:2603.12644) identifies four vulnerability classes in autonomous agent frameworks, focusing on execution-layer security gaps that prompt-level filters miss, making it directly relevant to building or deploying agentic systems.
  • LAMLAD (arXiv:2512.21404) shows dual-LLM agents automating feature-level adversarial attacks against Android malware classifiers, achieving a 97% evasion rate and raising concerns about automation lowering the skill barrier for adversarial ML attacks.
I have been building a bi-weekly digest that takes AI security papers from arXiv and translates them into practitioner-oriented intelligence. Each paper gets rated on four dimensions: Threat Realism, Defensive Urgency, Novelty, and Research Maturity (1-5 scale), then classified as Act Now / Watch / Horizon based on how quickly defenders need to respond.

The first issue covers three papers:

**Cascade (arXiv:2603.12023) -- Act Now**

Demonstrates compound attacks that chain software CVEs with hardware-level exploits (Rowhammer) against compound AI systems. The key insight is that multi-component AI architectures inherit the full CVE surface of every component, and attackers can compose gadgets across the software-hardware boundary. Rated 5/5 on Novelty -- this cross-stack attack composition hasn't been explored systematically before.

**OpenClaw (arXiv:2603.12644) -- Act Now**

Identifies four vulnerability classes in autonomous agent frameworks through a case study of OpenClaw. Focuses on execution-layer security gaps that prompt-level filters completely miss. If you are building or deploying agentic systems, this is directly relevant -- the attack surface is in the tool-use layer, not the prompt layer.

**LAMLAD (arXiv:2512.21404) -- Watch**

Uses dual-LLM agents to automate feature-level adversarial attacks against Android malware classifiers. Achieves a 97% evasion rate. The concern is the automation angle -- this substantially lowers the skill barrier for adversarial ML attacks.

Every claim links back to the source arXiv paper. We use a [VERIFY] tag system for anything that could not be directly confirmed against the source material.

First issue: https://raxe.ai/labs/radar/radar-2026-001

Full archive with structured metadata: https://raxe.ai/labs/radar

No paywall, no signup.
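For readers who want to consume the archive's structured metadata programmatically, the four-dimension rubric can be modeled as a small data structure. This is a minimal sketch: the field names, validation, and triage thresholds below are my own illustrative assumptions, not the digest's published methodology.

```python
from dataclasses import dataclass

@dataclass
class PaperScore:
    """One paper's rating on the digest's four 1-5 dimensions."""
    threat_realism: int
    defensive_urgency: int
    novelty: int
    research_maturity: int

def triage(score: PaperScore) -> str:
    """Classify a paper as Act Now / Watch / Horizon.

    The thresholds here are hypothetical, chosen only to show the
    shape of the mapping from scores to response tiers.
    """
    dims = (score.threat_realism, score.defensive_urgency,
            score.novelty, score.research_maturity)
    if any(not 1 <= v <= 5 for v in dims):
        raise ValueError("each dimension must be on a 1-5 scale")
    # High urgency combined with realistic threat -> immediate action.
    if score.defensive_urgency >= 4 and score.threat_realism >= 4:
        return "Act Now"
    if score.defensive_urgency >= 3:
        return "Watch"
    return "Horizon"

# Example: a paper scoring high on urgency and realism lands in Act Now.
print(triage(PaperScore(threat_realism=5, defensive_urgency=5,
                        novelty=5, research_maturity=4)))
```

Keeping the classification as a pure function of the four scores makes the triage decision reproducible and easy to audit against the per-paper ratings in the archive.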
submitted by /u/cyberamyntas