SNEAKDOOR: Stealthy Backdoor Attacks against Distribution Matching-based Dataset Condensation
arXiv cs.AI / 4/1/2026
Key Points
- The paper introduces “Sneakdoor,” a new stealth-focused backdoor attack targeting distribution matching-based dataset condensation methods.
- Sneakdoor achieves imperceptibility by exploiting vulnerabilities near class decision boundaries and using a generative module to create input-aware triggers aligned with local feature geometry.
- The approach balances attack success rate, clean test accuracy, and stealthiness, reducing detectability in both the synthetic condensed data and the triggered samples presented at inference time.
- Experiments across multiple datasets show Sneakdoor substantially improves “invisibility” while maintaining high attack efficacy.
- The authors provide an implementation repository for reproducibility and further study of the attack method.
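The digest does not include the paper's actual generator architecture or trigger objective, so as a loose, hypothetical sketch of what an "input-aware trigger" under an imperceptibility budget could look like (the function name, the linear map `W`, and the `tanh`-shaped pattern are all illustrative assumptions, not taken from the paper):

```python
import numpy as np

def make_trigger(x, W, eps=8 / 255):
    """Hypothetical input-aware trigger sketch.

    Derives a perturbation from the input itself (so each sample gets a
    different pattern, as an input-aware generator would produce) and
    clips it to an L-infinity budget `eps` so the change stays visually
    imperceptible. `W` stands in for a learned generator's weights.
    """
    # Input-dependent pattern in [-1, 1], shaped like the image.
    delta = np.tanh(W @ x.reshape(-1)).reshape(x.shape)
    # Scale to the imperceptibility budget, then keep pixels valid.
    return np.clip(x + eps * delta, 0.0, 1.0)
```

In the actual attack, `W` would be replaced by a trained generative module and the perturbation would additionally be optimized to sit near class decision boundaries; this sketch only illustrates the per-sample, bounded-magnitude structure the summary describes.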