PoiCGAN: A Targeted Poisoning Based on Feature-Label Joint Perturbation in Federated Learning
arXiv cs.LG · March 26, 2026
Key Points
- The paper introduces PoiCGAN, a targeted poisoning attack for federated learning that uses feature–label joint perturbations to compromise industrial image classification models without triggering common anomaly defenses.
- PoiCGAN is built on a conditional GAN: modified generator and discriminator inputs steer training so that the generator produces poisoned samples while label flipping is performed automatically as part of the same process.
- Experiments on multiple datasets show an attack success rate 83.97% higher than baseline poisoning methods, while main-task accuracy degrades by less than 8.87%.
- The authors report that both the crafted poisoned samples and the resulting malicious models are highly stealthy, making them harder to detect and remove through model performance testing or anomaly-based defenses.
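
The core idea above, jointly perturbing features while flipping labels, can be sketched in a few lines. This is an illustrative simplification, not the paper's method: in PoiCGAN the perturbation comes from a trained conditional GAN generator, whereas here it is simulated as bounded random noise; the function name `craft_poison` and all parameters are hypothetical.

```python
import numpy as np

def craft_poison(X, y, source_class, target_class, eps=0.1, rng=None):
    """Illustrative feature-label joint perturbation (NOT the paper's cGAN):
    perturb features of source-class samples within an eps-ball and flip
    their labels to the attacker's target class."""
    rng = np.random.default_rng(rng)
    Xp, yp = X.copy().astype(float), y.copy()
    idx = np.where(y == source_class)[0]
    # Feature perturbation: in PoiCGAN this would be generator output;
    # here it is stand-in bounded noise.
    Xp[idx] += rng.uniform(-eps, eps, size=Xp[idx].shape)
    # Label flipping applied jointly with the feature change.
    yp[idx] = target_class
    return Xp, yp

# Toy data: 6 samples, 4 features, binary classes.
X = np.zeros((6, 4))
y = np.array([0, 1, 0, 1, 1, 0])
Xp, yp = craft_poison(X, y, source_class=0, target_class=1, eps=0.05, rng=0)
```

After poisoning, every former source-class sample carries the target label and a perturbation no larger than `eps`, while non-targeted samples are untouched; a stealthy attack would additionally constrain the perturbation so the samples survive anomaly-based filtering.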
