Revisiting Label Inference Attacks in Vertical Federated Learning: Why They Are Vulnerable and How to Defend
arXiv cs.LG / 3/20/2026
Key Points
- The paper revisits label inference attacks (LIAs) in vertical federated learning (VFL) and shows they remain a threat even when bottom models focus on feature extraction, using mutual information analysis to reveal a "model compensation" phenomenon.
- It proves that in VFL the mutual information between layer outputs and labels grows with layer depth, indicating that the top model performs the label mapping while bottom models primarily extract features.
- The authors introduce task reassignment to break the distributional alignment between extracted features and labels, showing that disrupting this alignment significantly reduces LIA success.
- They propose a zero-overhead defense based on layer adjustment: shifting the cut layer forward (toward the input) increases the top model's share of layers, improving resistance to LIAs and potentially complementing other defenses.
- Extensive experiments across five datasets and multiple model architectures validate the defense and highlight practical implications for secure VFL deployment.
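The layer-adjustment idea above can be sketched with a toy example. This is a hypothetical illustration, not the paper's code: it treats a model as an ordered list of layer names, with the cut layer deciding which layers stay in each party's bottom model versus the server's top model. The `split_at_cut` helper and layer names are assumptions for illustration only.

```python
# Hypothetical sketch of the cut-layer split in VFL (not the paper's code).
# In VFL, layers below the cut form a party's "bottom model" (feature
# extraction); layers at and above the cut form the server's "top model"
# (label mapping). The defense shifts the cut forward, toward the input,
# so the top model holds a larger share of the layers.

def split_at_cut(layers, cut):
    """Split an ordered list of layers into (bottom, top) at index `cut`."""
    return layers[:cut], layers[cut:]

layers = ["conv1", "conv2", "conv3", "fc1", "fc2"]

# Default split: bottom extracts features, top maps them to labels.
bottom, top = split_at_cut(layers, 3)

# Defense: move the cut one layer forward, enlarging the top model's share.
bottom_def, top_def = split_at_cut(layers, 2)
assert len(top_def) > len(top)  # more layers stay with the top model
```

Since the paper reports that mutual information with the labels grows with layer depth, keeping deeper layers on the top-model side leaves less label-correlated signal in the bottom model's outputs for an attacker to exploit.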