Differentially Private Model Merging

arXiv cs.LG / 4/24/2026


Key Points

  • The paper presents methods to merge multiple already-trained models to produce a single model that meets an arbitrary target differential privacy (DP) requirement without any extra training steps during deployment or inference.
  • It proposes two post-processing merging techniques—random selection of models and linear combination—to generate a final private model for a desired privacy parameter.
  • The authors provide privacy guarantees using Rényi Differential Privacy (RDP) and analyze privacy loss distributions for broad problem settings.
  • In a private mean estimation case study, they derive a full privacy/utility characterization and theoretically show linear combination is superior to random selection.
  • Experiments on multiple models and both synthetic and real-world datasets empirically validate the proposed approach and the analysis.
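The two merging operations can be sketched in a few lines, treating each trained model as a flat parameter list. This is an illustrative reading of the paper's setup, not its implementation; the function names are hypothetical, and the accompanying privacy accounting (via RDP and privacy loss distributions) is the substantive part the sketch omits.

```python
import random

def random_selection(models, probs, rng=random):
    """Pick one pre-trained model at random; the sampling probabilities
    trade off between the source models' privacy/utility levels."""
    return rng.choices(models, weights=probs, k=1)[0]

def linear_combination(models, weights):
    """Entrywise weighted sum of parameter vectors (assumes all models
    share the same architecture, i.e. the same parameter length)."""
    return [sum(w * theta[i] for w, theta in zip(weights, models))
            for i in range(len(models[0]))]
```

Both are pure post-processing of already-released models, which is why no additional training (and, for a fixed set of source models, no fresh data access) is needed to hit a new privacy target.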

Abstract

In machine learning applications, privacy requirements at inference or deployment time can change constantly due to varying policies, regulations, or user expectations. In this work, we aim to generate a spectrum of models satisfying any target differential privacy (DP) requirement without additional training steps, given a set of existing models trained on the same dataset with different privacy/utility tradeoffs. We propose two post-processing techniques, namely random selection and linear combination, to output a final private model for any target privacy parameter. We provide privacy accounting for these approaches through the lens of Rényi DP and privacy loss distributions for general problems. In a case study on private mean estimation, we fully characterize the privacy/utility results and theoretically establish the superiority of linear combination over random selection. Empirically, we validate our approach and analyses on several models and on both synthetic and real-world datasets.
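For intuition on why linear combination can beat random selection in the mean-estimation case, consider two Gaussian-mechanism releases of the same mean with noise scales sigma1 and sigma2 (illustrative numbers, not taken from the paper). A linear combination with weight w adds the independent noises in quadrature, while random selection with probability w mixes the variances linearly, so the combination's variance is never larger:

```python
# Two noisy estimates of the same mean (hypothetical noise scales).
sigma1, sigma2, w = 1.0, 2.0, 0.5

# Linear combination w*m1 + (1-w)*m2: independent noise adds in quadrature.
var_linear = w**2 * sigma1**2 + (1 - w)**2 * sigma2**2

# Random selection (m1 with prob. w, else m2): expected variance is a mixture.
var_random = w * sigma1**2 + (1 - w) * sigma2**2

assert var_linear <= var_random  # since w**2 <= w for w in [0, 1]
```

This only compares utility; the two mechanisms also incur different privacy costs for the same weights, which is what the paper's RDP analysis characterizes before concluding that linear combination dominates.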