PoiCGAN: A Targeted Poisoning Based on Feature-Label Joint Perturbation in Federated Learning
arXiv cs.LG, March 26, 2026
Key Points
- The paper introduces PoiCGAN, a targeted poisoning attack for federated learning that uses feature–label joint perturbations to compromise industrial image classification models without triggering common anomaly defenses.
- PoiCGAN is built on a conditional GAN: by modifying the inputs fed to the generator and discriminator, the attack steers training toward producing poisoned samples that carry flipped labels by construction (see the sketch after this list).
- Experiments on multiple datasets show an attack success rate improvement of 83.97% over baseline poisoning methods while keeping main-task accuracy degradation under 8.87%.
- The authors report that both the crafted poisoned samples and the resulting malicious models are highly stealthy, making them hard to detect and remove via model performance tests or anomaly-based defenses.