Feature-Aware Anisotropic Local Differential Privacy for Utility-Preserving Graph Representation Learning in Metal Additive Manufacturing
arXiv cs.LG / 4/8/2026
Key Points
- The paper proposes FI-LDP-HGAT, a utility-preserving graph representation learning framework for metal additive manufacturing defect detection under Local Differential Privacy constraints.
- It addresses two gaps in prior work: defect models that ignore layer-wise physical couplings in melt-pool data, and LDP methods that inject uniform noise across all features and severely degrade utility.
- FI-LDP-HGAT combines a stratified Hierarchical Graph Attention Network (HGAT) to model spatial/thermal dependencies with a feature-importance-aware anisotropic Gaussian mechanism that reallocates the privacy budget across embedding dimensions using an encoder-derived importance prior.
- Experiments on a Directed Energy Deposition (DED) porosity dataset report 81.5% utility recovery at ε = 4 and defect recall of 0.762 at ε = 2, outperforming classical ML, standard GNNs, and other privacy mechanisms including DP-SGD.
- Mechanistic analysis shows a strong negative correlation (Spearman ρ = −0.81) between feature importance and injected noise magnitude, supporting the paper's claim that anisotropic noise allocation drives the privacy-utility gains.
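The core idea of the anisotropic mechanism can be sketched in a few lines: scale per-dimension Gaussian noise inversely with a feature-importance prior, so that the total inverse-variance "privacy cost" matches an isotropic baseline. This is a minimal illustrative heuristic, not the paper's exact calibration; the function names, the analytic Gaussian bound used for the baseline scale, and the inverse-variance budget constraint are all assumptions for the sake of the sketch.

```python
import numpy as np

def per_dim_sigma(importance, epsilon, delta=1e-5, sensitivity=1.0):
    """Per-dimension noise scales: important dimensions get less noise.

    Heuristic sketch: start from the classical isotropic Gaussian-mechanism
    scale, then reallocate so that sum(1/sigma_i^2) equals the isotropic
    total (a proxy for keeping the overall privacy budget fixed).
    """
    # Isotropic baseline from the standard (epsilon, delta) Gaussian bound.
    sigma_iso = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    w = importance / importance.sum()   # normalized importance prior
    d = len(importance)
    # sigma_i = sigma_iso / sqrt(w_i * d); sum of 1/sigma_i^2 stays d/sigma_iso^2.
    return sigma_iso / np.sqrt(w * d)

def privatize(embedding, importance, epsilon, rng=None):
    """Add importance-aware anisotropic Gaussian noise to an embedding."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = per_dim_sigma(importance, epsilon)
    return embedding + rng.normal(0.0, sigma, size=embedding.shape)
```

With a uniform importance prior this degenerates to the standard isotropic mechanism; a skewed prior reproduces the reported pattern of high-importance dimensions receiving smaller noise magnitudes.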