Scale-Aware Adversarial Analysis: A Diagnostic for Generative AI in Multiscale Complex Systems
arXiv cs.LG / 5/4/2026
Key Points
- The paper argues that current explainable-AI methods for generative models—especially perturbation and gradient-saliency approaches—use pixel-wise changes that can create unphysical artifacts and move inputs outside the valid data distribution.
- It introduces a scale-aware diagnostic framework built on Constrained Diffusion Decomposition (CDD), which decomposes data into physically constrained scales and evaluates generative models through diffusion-driven, scale-aware modifications.
- Using this framework with a DDPM, the authors apply deterministic interventions in a continuous CDD-derived scale space rather than relying on unphysical input perturbations.
- Results show that under moderate physical perturbations, an unconstrained generative model can exhibit localized structural freezing and nonlinear instability, failing to preserve cross-scale continuity and diverging when exposed to unseen physical states.
- The authors claim the method provides a controlled physical testbed for probing algorithmic vulnerabilities and guiding future generative architectures to better respect multiscale causality.
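The contrast the key points draw between pixel-wise perturbation and scale-aware intervention can be illustrated with a minimal sketch. The paper's CDD procedure is not specified here, so a simple Gaussian band split stands in for the multiscale decomposition; the function names, band cutoffs, and perturbation forms below are illustrative assumptions, not the authors' method.

```python
# Illustrative only: a Gaussian band split as a stand-in for CDD.
import numpy as np

def gaussian_blur(field, sigma):
    """Separable Gaussian blur of a 2D field (reflect-padded)."""
    r = int(3 * sigma)
    k = np.exp(-0.5 * (np.arange(-r, r + 1) / sigma) ** 2)
    k /= k.sum()
    pad = np.pad(field, r, mode="reflect")
    out = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, out)

def decompose(field, sigmas=(1.0, 4.0)):
    """Split a field into coarse / mid / fine bands; bands sum back to the input."""
    mid_cut = gaussian_blur(field, sigmas[0])
    coarse = gaussian_blur(field, sigmas[1])
    return [coarse, mid_cut - coarse, field - mid_cut]

def perturb_scale(field, band_idx, eps=0.1):
    """Deterministic scale-aware intervention: amplify one band, keep the rest."""
    bands = decompose(field)
    bands[band_idx] = bands[band_idx] * (1.0 + eps)
    return sum(bands)

def perturb_pixels(field, eps=0.1, seed=0):
    """Pixel-wise perturbation: i.i.d. noise that ignores cross-scale structure."""
    rng = np.random.default_rng(seed)
    return field + eps * rng.standard_normal(field.shape)

# Toy multiscale field for demonstration.
x = np.linspace(0, 4 * np.pi, 64)
field = np.sin(x)[None, :] * np.cos(x / 2)[:, None]
```

Because the band split is telescoping, the decomposition reconstructs the input exactly, so an intervention with `eps=0` is the identity; this is the property that makes a perturbation in scale space a controlled intervention rather than an out-of-distribution input.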