Taming Noise-Induced Prototype Degradation for Privacy-Preserving Personalized Federated Fine-Tuning

arXiv cs.CV / 5/1/2026


Key Points

  • The paper addresses privacy risks in Prototype-based Personalized Federated Learning (ProtoPFL), where sharing class prototypes can leak information.
  • It critiques a standard defense that uses per-example L2 clipping plus isotropic Gaussian noise (IGPP) for Local Differential Privacy (LDP), noting that it often over-perturbs discriminative dimensions and forces a difficult trade-off between the clipping threshold and representation fidelity.
  • The authors propose VPDR, a client-side privacy plug-in for ProtoPFL that preserves semantic separability by using variance-adaptive noise injection (VPP) to allocate less noise to discriminative subspaces.
  • They introduce Distillation-guided Clipping Regularization (DCR) to adapt feature norms toward the clipping threshold while keeping prediction consistency.
  • Theoretical results show VPDR’s groupwise mechanism offers privacy guarantees at least as strong as the isotropic baseline under the same privacy constraints, and experiments show improved privacy-utility trade-offs and stronger robustness against realistic attacks.
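The baseline the paper critiques can be sketched concretely: clip each feature vector to a fixed L2 ball so the prototype's per-example sensitivity is bounded, average, then add isotropic Gaussian noise. The function and parameter names below are illustrative, not from the paper:

```python
import numpy as np

def igpp_prototype(features, clip_norm=1.0, sigma=0.5, rng=None):
    """Sketch of the IGPP baseline: clip, average, add isotropic noise.

    Each feature vector is projected onto an L2 ball of radius `clip_norm`,
    which bounds how much any single example can shift the class prototype
    (the mean of clipped features). Isotropic Gaussian noise scaled to that
    sensitivity is then added to every dimension equally.
    """
    rng = np.random.default_rng(rng)
    features = np.asarray(features, dtype=float)
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    clipped = features * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    prototype = clipped.mean(axis=0)
    # Replacing one clipped example moves the mean by at most
    # 2 * clip_norm / n in L2, so the noise scale follows that bound.
    sensitivity = 2.0 * clip_norm / len(features)
    return prototype + rng.normal(0.0, sigma * sensitivity, size=prototype.shape)
```

Because the noise is the same in every dimension, discriminative directions are perturbed just as heavily as uninformative ones, which is exactly the weakness VPDR targets.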

Abstract

Prototype-based Personalized Federated Learning (ProtoPFL) enables efficient multi-domain adaptation by communicating compact class prototypes, but directly sharing them poses privacy risks. A common defense involves per-example ℓ₂ clipping before prototype computation to bound sensitivity, followed by isotropic Gaussian noise to enforce Local Differential Privacy (LDP). However, Isotropic Gaussian Prototype Perturbation (IGPP) typically over-perturbs discriminative dimensions and struggles to balance the clipping threshold with representation fidelity. In this paper, we propose VPDR, a client-side privacy plug-in that seamlessly integrates into existing ProtoPFL frameworks. Motivated by the observation that dimension-wise class variance reflects discriminability, we introduce Variance-adaptive Prototype Perturbation (VPP), which allocates less noise to discriminative subspaces, preserving semantic separability while ensuring privacy. We further develop Distillation-guided Clipping Regularization (DCR), which enables feature norms to adaptively concentrate near the predefined clipping threshold while maintaining prediction consistency. Theoretical analysis shows that our groupwise mechanism provides privacy guarantees no weaker than the isotropic baseline under the same privacy constraints. Extensive experiments on multi-domain benchmarks demonstrate that VPDR achieves a superior privacy-utility trade-off, outperforming IGPP in personalized federated fine-tuning without sacrificing robustness against realistic attacks.
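The core idea behind variance-adaptive perturbation can be illustrated with a small sketch: treat dimensions where class prototypes spread apart (high across-class variance) as discriminative and give them a smaller share of a fixed total noise budget. This is a hypothetical rendering of the intuition, not the paper's exact groupwise VPP mechanism, and all names and parameters below are assumptions:

```python
import numpy as np

def variance_adaptive_noise(prototype, class_protos, total_sigma2, floor=1e-6, rng=None):
    """Illustrative variance-adaptive perturbation in the spirit of VPP.

    Per-dimension noise variances are allocated inversely to the
    across-class variance of the prototypes, so discriminative dimensions
    receive less noise, while the summed noise power matches an isotropic
    budget of `total_sigma2` per dimension (i.e. total_sigma2 * d overall).
    """
    rng = np.random.default_rng(rng)
    var = np.var(np.asarray(class_protos, dtype=float), axis=0) + floor
    # Inverse-variance weights: high-variance (discriminative) dimensions
    # get a small weight, low-variance dimensions absorb more noise.
    inv = 1.0 / var
    w = inv / inv.sum()
    d = prototype.shape[0]
    sigma2 = total_sigma2 * d * w  # per-dimension noise variances
    noise = rng.normal(0.0, np.sqrt(sigma2))
    return prototype + noise, sigma2
```

Under this allocation the aggregate noise power equals the isotropic baseline's, so semantic separability can be better preserved without loosening the overall perturbation budget; the paper's actual privacy accounting for its groupwise mechanism is given in its theoretical analysis.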