Membership Inference Attacks Expose Participation Privacy in ECG Foundation Encoders
arXiv cs.LG / 4/14/2026
Key Points
- Self-supervised “foundation” ECG encoders are being reused across tasks and institutions, but this reuse can leak participation privacy through model outputs or latent embeddings even when raw waveforms and labels are withheld.
- The paper presents an audit of membership inference attacks against multiple ECG foundation encoder types, including contrastive methods (SimCLR, TS2Vec) and masked reconstruction (CNN- and Transformer-based MAE).
- It evaluates three attacker models based on realistic interfaces—score-only black-box scalar outputs, adaptive learned attackers using repeated queries, and embedding-access attackers probing representation geometry.
- Results show participation leakage varies by objective and is strongest for small or institution-specific cohorts, while larger and more diverse pretraining datasets reduce tail risk.
- The authors conclude that withholding raw signals and diagnostic labels is not sufficient for participation privacy, and call for deployment-aware, interface-specific auditing of connected-health systems.
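The simplest of the three attacker interfaces, the score-only black-box attacker, can be sketched in a few lines. The sketch below is illustrative, not the paper's method: it assumes the attacker observes a scalar per-record score from the encoder (e.g., a negative reconstruction error, which tends to be higher for training members) and thresholds it to guess membership. The score distributions here are synthetic.

```python
import numpy as np

def score_threshold_mia(member_scores, nonmember_scores, threshold):
    """Score-only black-box membership inference: predict 'member'
    whenever the model's per-record score exceeds a fixed threshold.
    Returns attack accuracy on a balanced member/non-member set."""
    hits_members = (member_scores > threshold).sum()       # true positives
    hits_nonmembers = (nonmember_scores <= threshold).sum()  # true negatives
    total = len(member_scores) + len(nonmember_scores)
    return (hits_members + hits_nonmembers) / total

rng = np.random.default_rng(0)
# Hypothetical scores: members score higher on average (lower loss),
# mimicking the memorization signal a real attacker would exploit.
members = rng.normal(loc=1.0, scale=1.0, size=1000)
nonmembers = rng.normal(loc=0.0, scale=1.0, size=1000)

acc = score_threshold_mia(members, nonmembers, threshold=0.5)
```

An accuracy meaningfully above 0.5 on such a balanced set is the basic evidence of participation leakage; the paper's stronger adaptive and embedding-access attackers refine this signal with repeated queries or representation geometry.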