Model Privacy: A Unified Framework for Understanding Model Stealing Attacks and Defenses

arXiv stat.ML / 4/7/2026


Key Points

  • The paper introduces a unified theoretical framework, “Model Privacy,” to systematically analyze model stealing attacks against ML models accessed via limited query-response interfaces.
  • It formalizes the threat model and the objectives of attackers and defenders, and proposes metrics to quantify the effectiveness of different attack and defense strategies (a toy illustration follows this list).
  • The authors study fundamental tradeoffs between model utility and privacy, providing guidance on how security measures impact performance.
  • A key insight is that effective defenses exploit attack-specific structure in the perturbations added to query responses; in other words, defenses should be tailored to the attacker's behavior.
  • The framework is validated, from the defender's perspective, through experiments across multiple learning scenarios, showing that defenses designed under the proposed theory work well in practice.

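To make the setting concrete, here is a minimal sketch of a model stealing attack through a query-response interface. This is not the paper's algorithm: the victim model, the query budget, and the error metric below are all hypothetical choices made for illustration, and the paper defines its own formal threat model and evaluation metrics.

```python
# Minimal sketch (not the paper's algorithm) of a model stealing attack through
# a query-response interface. All names here (victim_api, true_w, ...) are
# hypothetical and chosen only for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Victim: a linear model hidden behind an API; the attacker only sees query -> response.
true_w = rng.normal(size=5)

def victim_api(X, noise_scale=0.0):
    """Return responses, optionally perturbed with i.i.d. noise as a toy defense."""
    y = X @ true_w
    return y + noise_scale * rng.normal(size=y.shape)

# Attack: send a limited batch of queries, then fit a surrogate to the responses.
X_query = rng.normal(size=(200, 5))
surrogate = LinearRegression().fit(X_query, victim_api(X_query))

# One way to quantify the attack's "goodness": how closely the surrogate tracks
# the victim on fresh inputs (the paper proposes its own formal metrics).
X_test = rng.normal(size=(1000, 5))
attack_error = np.mean((surrogate.predict(X_test) - X_test @ true_w) ** 2)
print(f"surrogate reconstruction error (undefended victim): {attack_error:.4f}")
```
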
Abstract

The use of machine learning (ML) has become increasingly prevalent in various domains, highlighting the importance of understanding and ensuring its safety. One pressing concern is the vulnerability of ML applications to model stealing attacks. These attacks involve adversaries attempting to recover a learned model through limited query-response interactions, such as those found in cloud-based services or on-chip artificial intelligence interfaces. While existing literature proposes various attack and defense strategies, these often lack a theoretical foundation and standardized evaluation criteria. In response, this work presents a framework called "Model Privacy", providing a foundation for comprehensively analyzing model stealing attacks and defenses. We establish a rigorous formulation for the threat model and objectives, propose methods to quantify the goodness of attack and defense strategies, and analyze the fundamental tradeoffs between utility and privacy in ML models. Our developed theory offers valuable insights into enhancing the security of ML models, especially highlighting the importance of the attack-specific structure of perturbations for effective defenses. We demonstrate the application of model privacy from the defender's perspective through various learning scenarios. Extensive experiments corroborate the insights and the effectiveness of defense mechanisms developed under the proposed framework.
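
The utility-privacy tradeoff mentioned in the abstract can be illustrated with a toy i.i.d.-noise defense: the more noise the defender injects into responses, the worse the utility for benign users, but the harder it becomes for an attacker to reconstruct the model. The setup below is an assumption made for illustration only; the paper's theory argues that attack-aware (structured) perturbations can achieve better tradeoffs than the plain i.i.d. noise used here.

```python
# Toy illustration (assumed setup, not the paper's defense) of the utility-privacy
# tradeoff under an i.i.d.-noise perturbation of query responses.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
true_w = rng.normal(size=5)            # hidden victim model
X_query = rng.normal(size=(200, 5))    # attacker's queries
X_test = rng.normal(size=(1000, 5))    # held-out inputs for measuring the attack
clean = X_query @ true_w               # undefended responses

for noise_scale in (0.0, 0.5, 1.0, 2.0):
    released = clean + noise_scale * rng.normal(size=clean.shape)
    # Utility loss: distortion of the released responses relative to the truth.
    utility_loss = np.mean((released - clean) ** 2)
    # Privacy proxy: error of the surrogate an attacker fits to the released responses.
    surrogate = LinearRegression().fit(X_query, released)
    attack_error = np.mean((surrogate.predict(X_test) - X_test @ true_w) ** 2)
    print(f"noise={noise_scale:.1f}  utility_loss={utility_loss:.3f}  attack_error={attack_error:.3f}")
```

Sweeping the noise scale traces out one point on the tradeoff curve per setting; a defense is "good" in this toy picture if it raises the attacker's error much faster than it raises the defender's utility loss.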