Contract And Conquer: How to Provably Compute Adversarial Examples for a Black-Box Model?
arXiv cs.LG / 3/12/2026
Key Points
- The paper proposes Contract And Conquer (CAC), a method to provably compute adversarial examples for neural networks in a black-box setting.
- CAC uses knowledge distillation on an expanding distillation dataset and a precise contraction of the adversarial search space to enable provable guarantees.
- The authors prove a transferability guarantee: CAC can produce an adversarial example for the black-box model within a fixed number of iterations.
- Experiments on ImageNet classifiers, including vision transformers, show that CAC outperforms existing black-box attack methods.
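The distill-then-attack pattern in the key points can be sketched in a few lines. The toy below is a generic surrogate-transfer attack (a hidden linear "black box", a least-squares surrogate fit to its queried labels, and an FGSM-style sign-gradient step), not the paper's CAC procedure; it omits the expanding distillation dataset and the search-space contraction, and all names (`black_box`, `attack`, `W_sur`) are illustrative.

```python
# Sketch of a surrogate-distillation transfer attack (NOT the paper's
# CAC algorithm). All names and settings here are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# A hidden "black-box" binary classifier: only its predicted labels
# are observable, never its weights or gradients.
W_true = rng.normal(size=(2, 5))
def black_box(x):
    return int(np.argmax(W_true @ x))

# Step 1 (distillation): query the black box on a dataset and fit a
# surrogate to its labels (here, least squares on one-hot labels).
X = rng.normal(size=(200, 5))
Y = np.eye(2)[[black_box(x) for x in X]]
W_sur, *_ = np.linalg.lstsq(X, Y, rcond=None)      # surrogate weights, 5x2

# Step 2 (white-box step on the surrogate): perturb x to raise the
# surrogate logit of the other class, FGSM-style via the sign of the
# logit-difference gradient.
def attack(x, label, eps):
    grad = W_sur[:, 1 - label] - W_sur[:, label]
    return x + eps * np.sign(grad)

# Step 3 (transfer): test the perturbed input against the black box,
# enlarging the step size until its predicted label flips.
x0 = rng.normal(size=5)
c0 = black_box(x0)
flipped = False
for eps in (0.25, 0.5, 1.0, 2.0, 4.0):
    x_adv = attack(x0, c0, eps)
    if black_box(x_adv) != c0:
        flipped = True
        break
```

Because the surrogate's decision boundary closely tracks the black box on this toy problem, a perturbation crafted purely on the surrogate transfers; CAC's contribution is making that transfer provable within a fixed number of iterations rather than empirical.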