Meta-Learning and Targeted Differential Privacy to Improve the Accuracy-Privacy Trade-off in Recommendations

arXiv cs.LG / 4/30/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper tackles the fundamental trade-off in privacy-preserving recommender systems: adding differential privacy (DP) noise can substantially reduce recommendation accuracy.
  • It introduces “targeted DP,” which applies DP noise only to the most stereotypical user data, i.e., the interactions most likely to reveal sensitive attributes such as gender or age, avoiding unnecessary perturbation elsewhere.
  • At the model level, it uses meta-learning to make the recommender more robust to the remaining DP-induced noise.
  • The authors report improved accuracy-privacy trade-offs versus standard methods, including uniformly applied DP and full-DP baselines, with lower empirical privacy risk.
  • Overall, the work suggests a combined approach—selective DP at the data layer plus meta-learning at the model layer—can better balance user privacy and recommendation performance.
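The summary does not specify how targeted DP is implemented, but the core idea, perturbing only the sensitive-revealing entries, can be sketched as follows. Here `sensitive_mask` is a hypothetical input; how the paper identifies the “stereotypical” interactions is not described in this digest, and the Laplace mechanism is one common DP choice, not necessarily the authors’.

```python
import numpy as np

def targeted_dp(ratings, sensitive_mask, epsilon, sensitivity=1.0, rng=None):
    """Illustrative sketch of targeted DP: add Laplace noise only to the
    entries flagged as revealing sensitive attributes; leave the rest
    unperturbed. `sensitive_mask` is assumed to be precomputed upstream."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = ratings.astype(float)          # work on a float copy
    scale = sensitivity / epsilon          # Laplace scale for the given budget
    noisy[sensitive_mask] += rng.laplace(0.0, scale,
                                         size=int(sensitive_mask.sum()))
    return noisy
```

Compared with uniform DP, the non-sensitive entries keep their exact values, which is the source of the reported accuracy gain.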

Abstract

Balancing differential privacy (DP) with recommendation accuracy is a key challenge in privacy-preserving recommender systems, since DP noise degrades accuracy. We address this trade-off at both the data and model levels. At the data level, we apply DP only to the most stereotypical user data likely to reveal sensitive attributes, such as gender or age, to reduce unnecessary perturbation; we refer to this as targeted DP. At the model level, we use meta-learning to improve robustness to the remaining DP noise. This achieves a better trade-off between accuracy and privacy than standard approaches: meta-learning improves accuracy, and targeted DP leads to lower empirical privacy risk compared to uniformly applied DP and full-DP baselines. Overall, our findings show that selectively applying DP at the data level together with meta-learning at the model level can effectively balance recommendation accuracy and user privacy.
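The meta-learning component is likewise not detailed in this digest. As a purely illustrative sketch (not the authors’ method), a Reptile-style outer loop can meta-train a model across freshly DP-noised copies of the data, so the learned weights land in a region that tolerates the residual noise; `make_noisy_task` is a hypothetical callback that resamples a noised training set.

```python
import numpy as np

def reptile_dp_robust(w, make_noisy_task, inner_steps=5, inner_lr=0.01,
                      meta_lr=0.1, meta_iters=200, rng=None):
    """Reptile-style meta-training of a linear model over resampled
    DP-noised datasets (illustrative sketch, linear least squares)."""
    rng = np.random.default_rng() if rng is None else rng
    w = w.astype(float).copy()
    for _ in range(meta_iters):
        X, y = make_noisy_task(rng)        # fresh DP-noised training copy
        w_task = w.copy()
        for _ in range(inner_steps):       # inner SGD on squared error
            grad = 2.0 * X.T @ (X @ w_task - y) / len(y)
            w_task -= inner_lr * grad
        w += meta_lr * (w_task - w)        # move toward the adapted weights
    return w
```

Because each outer step sees a differently noised dataset, the solution is implicitly averaged over the noise distribution rather than fit to any single perturbed copy.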