Beyond Semantic Priors: Mitigating Optimization Collapse for Generalizable Visual Forensics
arXiv cs.CV / 3/26/2026
Key Points
- The paper identifies a failure mode, "Optimization Collapse," in visual-forensics detectors trained with Sharpness-Aware Minimization (SAM): performance degrades to near-random guessing on non-semantic deepfakes once the perturbation radius grows beyond a narrow threshold.
- It introduces the Critical Optimization Radius (COR) to formalize the geometric stability of the optimization landscape, and the Gradient Signal-to-Noise Ratio (GSNR) to estimate a model's intrinsic generalization potential.
- Theoretical results show COR increases monotonically with GSNR, linking the collapse to layer-wise attenuation of gradient fidelity rather than to perturbation size alone.
- Rather than merely shrinking the perturbation radius (which stabilizes training but does not improve intrinsic generalization), the authors propose CoRIT, which combines a contrastive gradient proxy with training-free mechanisms for region refinement, signal preservation, and hierarchical representation integration.
- Experiments report that CoRIT mitigates Optimization Collapse and improves state-of-the-art generalization on cross-domain and universal forgery benchmarks.
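The SAM perturbation radius and the GSNR quantity that the key points revolve around can be sketched on a toy problem. Everything below is an illustrative assumption, not the paper's implementation: a noisy quadratic loss stands in for a detector's training loss, `sam_step` performs one standard SAM update with radius `rho`, and `gsnr` is a per-coordinate mean-squared-over-variance estimate from sampled minibatch gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic loss L(w) = 0.5 * ||w - w_star||^2 with noisy gradients;
# the noise term stands in for minibatch sampling noise (assumption).
w_star = np.array([1.0, -2.0])

def grad(w, noise=0.3):
    """Noisy gradient of the toy loss."""
    return (w - w_star) + noise * rng.normal(size=w.shape)

def sam_step(w, rho, lr=0.1):
    """One SAM update: ascend to the worst point on the rho-ball,
    then descend using the gradient taken at that perturbed point.
    rho is the perturbation radius the collapse analysis concerns."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent step of length rho
    g_adv = grad(w + eps)                        # gradient at the perturbed point
    return w - lr * g_adv

def gsnr(w, n_samples=256):
    """Per-coordinate gradient signal-to-noise ratio:
    mean(g)^2 / var(g) over repeated noisy gradient samples."""
    gs = np.stack([grad(w) for _ in range(n_samples)])
    return gs.mean(axis=0) ** 2 / (gs.var(axis=0) + 1e-12)

w = np.array([3.0, 3.0])
for _ in range(50):
    w = sam_step(w, rho=0.05)
print("final w:", w)        # drifts toward w_star = [1, -2]
print("GSNR at w:", gsnr(w))
```

On this toy loss a small `rho` converges cleanly; in the paper's setting, the claim is that once `rho` exceeds the COR (which grows with GSNR), the perturbed gradient `g_adv` becomes dominated by noise and training collapses.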