Revisiting Label Inference Attacks in Vertical Federated Learning: Why They Are Vulnerable and How to Defend
arXiv cs.LG / 3/20/2026
Key Points
- The paper studies label inference attacks (LIAs) in vertical federated learning (VFL) and shows that LIAs remain a vulnerability even when bottom models focus on feature extraction, using mutual information analysis to reveal a 'model compensation' phenomenon.
- It proves that in VFL, the mutual information between layer outputs and labels grows with layer depth, indicating that the top model handles the mapping to labels while bottom models primarily extract features (see the probing sketch after this list).
- The authors introduce task reassignment to break the distribution alignment between features and labels, showing that disrupting this alignment significantly reduces LIA success.
- They propose a zero-overhead defense based on layer adjustment: shifting the cut layer forward increases the share of layers held by the top model, thereby improving resistance to LIAs and potentially other attacks (see the split sketch after this list).
- Extensive experiments across five datasets and model architectures validate the defense and highlight practical implications for secure VFL deployment.
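The depth claim can be checked empirically with a small probe. Below is a minimal sketch, assuming a toy synthetic task and scikit-learn's `mutual_info_classif` as a rough per-layer estimator; the paper's models, data, and exact mutual information estimator are not given here, so every name is illustrative.

```python
import torch
import torch.nn as nn
from sklearn.feature_selection import mutual_info_classif

torch.manual_seed(0)
X = torch.randn(512, 16)
y = (X[:, 0] > 0).long()  # synthetic binary labels (assumption, not the paper's data)

net = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

# Train briefly so deeper layers actually become label-predictive.
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(net(X), y).backward()
    opt.step()

# Probe: mean per-dimension MI between each layer's output and the labels.
# Averaging per-dimension MI is a crude proxy for I(layer output; Y).
h = X
y_np = y.numpy()
for i, layer in enumerate(net):
    h = layer(h)
    mi = mutual_info_classif(h.detach().numpy(), y_np).mean()
    print(f"layer {i}: mean MI with labels = {mi:.3f}")
```

On a trained network, the printed values tend to rise with depth, which is the pattern the paper attributes to the top model doing the label mapping.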
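The layer-adjustment defense amounts to choosing where the network is cut between the bottom (feature) party and the top (label-holding) party. A minimal PyTorch sketch, with an assumed toy architecture and a hypothetical `split_at` helper, neither of which is the authors' exact setup:

```python
import torch
import torch.nn as nn

# Illustrative stack of layers shared between the two parties.
layers = [
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 10),
]

def split_at(cut: int):
    """Give the first `cut` layers to the bottom party and the rest
    to the top party that holds the labels."""
    bottom = nn.Sequential(*layers[:cut])
    top = nn.Sequential(*layers[cut:])
    return bottom, top

# Shifting the cut layer forward (smaller `cut`) leaves more layers in the
# top model, so the bottom party's embeddings carry less label information.
bottom, top = split_at(2)   # shallow bottom model: harder to infer labels
x = torch.randn(8, 16)
logits = top(bottom(x))     # forward pass across the two parties
print(logits.shape)         # torch.Size([8, 10])
```

Because the split only moves existing layers between parties, it adds no extra computation or communication, which is why the defense is described as zero-overhead.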
Related Articles
The massive shift toward edge computing and local processing
Dev.to
Self-Refining Agents in Spec-Driven Development
Dev.to
Week 3: Why I'm Learning 'Boring' ML Before Building with LLMs
Dev.to
The Three-Agent Protocol Is Transferable. The Discipline Isn't.
Dev.to
has anyone tried this? Flash-MoE: Running a 397B Parameter Model on a Laptop
Reddit r/LocalLLaMA