AI Navigate

Informationally Compressive Anonymization: Non-Degrading Sensitive Input Protection for Privacy-Preserving Supervised Machine Learning

arXiv cs.LG · March 18, 2026


Key Points

  • The paper introduces Informationally Compressive Anonymization (ICA) and the VEIL architecture, a privacy-preserving ML framework that avoids performance degradation by relying on architectural and mathematical design rather than noise injection or cryptography.
  • ICA embeds a supervised, multi-objective encoder inside a trusted Source Environment to transform raw inputs into low-dimensional, task-aligned latent representations that are irreversibly anonymized before leaving the trusted zone.
  • The authors give rigorous topological and information-theoretic proofs that the encodings are structurally non-invertible: inversion is logically impossible even under idealized attacker assumptions, and reconstruction probability vanishes under realistic ones.
  • Unlike prior autoencoder-based ppML approaches, ICA preserves predictive utility by aligning representation learning with downstream supervised objectives and avoids gradient clipping, noise budgets, or encryption at inference time.
  • The VEIL architecture enforces strict trust boundaries, supports scalable multi-region deployment, and aligns with privacy-by-design and post-quantum threat considerations, establishing a new enterprise ML foundation that is secure, performant, and safe by construction.
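The core mechanism in the key points above, a supervised encoder that exports only a low-dimensional latent vector across the trust boundary, can be illustrated with a minimal sketch. This is not the paper's implementation; the linear encoder and all names here are illustrative stand-ins, and the snippet only demonstrates why a map into a strictly lower-dimensional space is structurally non-invertible.

```python
import numpy as np

# Hypothetical sketch of ICA-style encoding (illustrative, not from the paper).
# A linear encoder W: R^n -> R^k with k < n has a non-trivial null space,
# so distinct raw inputs collide in latent space and cannot be told apart.

rng = np.random.default_rng(0)
n, k = 16, 4                       # input dimension > latent dimension
W = rng.standard_normal((k, n))    # stand-in for a trained encoder's final linear map

def encode(x):
    """Map a raw input to its low-dimensional, task-aligned latent vector."""
    return W @ x

x = rng.standard_normal(n)
z = encode(x)                      # only z would leave the Source Environment

# Any vector in the null space of W can be added to x without changing z,
# so an attacker holding only z cannot distinguish x from x + v.
_, _, Vt = np.linalg.svd(W)
v = Vt[-1]                         # one basis vector of the (n - k)-dim null space
assert np.allclose(W @ v, 0, atol=1e-10)

x_alt = x + 5.0 * v
z_alt = encode(x_alt)
print(np.allclose(z, z_alt))       # prints True: distinct inputs, identical latent
```

In the actual framework the encoder is trained with a multi-objective supervised loss rather than drawn at random, but the dimensionality argument for non-invertibility is the same: the preimage of any exported latent vector is an entire (n − k)-dimensional set of candidate inputs.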

Abstract

Modern machine learning systems increasingly rely on sensitive data, creating significant privacy, security, and regulatory risks that existing privacy-preserving machine learning (ppML) techniques, such as Differential Privacy (DP) and Homomorphic Encryption (HE), address only at the cost of degraded performance, increased complexity, or prohibitive computational overhead. This paper introduces Informationally Compressive Anonymization (ICA) and the VEIL architecture, a privacy-preserving ML framework that achieves strong privacy guarantees through architectural and mathematical design rather than noise injection or cryptography. ICA embeds a supervised, multi-objective encoder within a trusted Source Environment to transform raw inputs into low-dimensional, task-aligned latent representations, ensuring that only irreversibly anonymized vectors are exported to untrusted Training and Inference Environments. The paper rigorously proves that these encodings are structurally non-invertible using topological and information-theoretic arguments, showing that inversion is logically impossible, even under idealized attacker assumptions, and that, in realistic deployments, the attacker's conditional entropy over the original data diverges, driving reconstruction probability to zero. Unlike prior autoencoder-based ppML approaches, ICA preserves predictive utility by aligning representation learning with downstream supervised objectives, enabling low-latency, high-performance ML without gradient clipping, noise budgets, or encryption at inference time. The VEIL architecture enforces strict trust boundaries, supports scalable multi-region deployment, and naturally aligns with privacy-by-design regulatory frameworks, establishing a new foundation for enterprise ML that is secure, performant, and safe by construction, even in the face of post-quantum threats.
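The abstract's non-invertibility claim rests on a standard dimensionality argument. A hedged sketch, using my own notation rather than the paper's:

```latex
% Encoder from high-dimensional inputs to a strictly lower-dimensional latent space
f : \mathbb{R}^n \to \mathbb{R}^k, \qquad k < n.

% Topological argument: by invariance of domain, no continuous injection
% \mathbb{R}^n \hookrightarrow \mathbb{R}^k exists for k < n, so f is many-to-one.

% Information-theoretic argument: the preimage f^{-1}(z) of any latent z has
% dimension at least n - k, so for a continuous input distribution the
% attacker's conditional uncertainty about X given Z stays unbounded and
% exact reconstruction becomes impossible:
H(X \mid Z) \to \infty \quad \Longrightarrow \quad \Pr\!\left[\hat{X} = X \mid Z\right] \to 0.
```

This is only a schematic of the two proof strategies the abstract names; the paper's formal statements presumably quantify the attacker model and distributional assumptions more carefully.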