Abstract
As machine learning (ML) systems grow in scale and functionality, their security landscape has become increasingly complex, marked by a proliferation of attacks and defenses. However, existing studies largely treat these threats in isolation, lacking a coherent framework that exposes their shared principles and interdependencies. This fragmented view hinders systematic understanding and limits the design of comprehensive defenses. Crucially, the two foundational assets of ML -- \textbf{data} and \textbf{models} -- are no longer independent: vulnerabilities in one directly compromise the other. The absence of a holistic framework leaves open how these bidirectional risks propagate across the ML pipeline. To address this gap, we propose a \emph{unified closed-loop threat taxonomy} that explicitly frames model-data interactions along four directional axes, offering a principled lens for analyzing and defending foundation models. The four resulting threat classes capture distinct yet interrelated families of attacks: (1) Data$\rightarrow$Data (D$\rightarrow$D), including \emph{data decryption and watermark removal attacks}; (2) Data$\rightarrow$Model (D$\rightarrow$M), including \emph{poisoning, harmful fine-tuning, and jailbreak attacks}; (3) Model$\rightarrow$Data (M$\rightarrow$D), including \emph{model inversion, membership inference, and training data extraction attacks}; (4) Model$\rightarrow$Model (M$\rightarrow$M), including \emph{model extraction attacks}. This unified framework elucidates the underlying connections among these threats and establishes a foundation for developing scalable, transferable, and cross-modal security strategies, particularly within the landscape of foundation models.