Uncovering Memorization in Time Series Imputation Models: LBRM Membership Inference and Its Link to Attribute Leakage
arXiv cs.LG, March 26, 2026
Key Points
- The paper studies privacy risks in deep learning time series imputation models, showing they can be attacked via black-box membership inference even though prior privacy work has concentrated on memorization in generative models.
- It introduces a two-stage framework whose first stage is a membership inference attack that calibrates the target model's behavior against a reference model, improving detection accuracy even against models that resist overfitting-based attacks.
- It also presents what it claims is the first attribute inference attack for time series imputation, predicting sensitive characteristics of the training data.
- Experiments across attention-based and autoencoder architectures, trained both from scratch and fine-tuned with access to the initial weights, show the membership attack recovers a significant portion of the training data, outperforming a naive baseline on TPR@top-25%.
- The authors find the membership attack can predict whether attribute inference will succeed, reaching 90% precision versus 78% without this signal, linking memorization behavior to attribute leakage.
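The reference-model calibration in the second key point can be illustrated with a minimal sketch. The paper's exact LBRM scoring rule is not given here, so this uses a simple loss-difference score as an illustrative stand-in; the function names, the MSE loss, and the toy models are all assumptions for demonstration, not the authors' implementation.

```python
import numpy as np

def imputation_loss(model, series, mask):
    """MSE between the model's imputations and the true values at masked positions."""
    imputed = model(series, mask)
    return float(np.mean((imputed[mask] - series[mask]) ** 2))

def membership_score(target_model, reference_model, series, mask):
    """Calibrated score: training members of the target tend to have a lower
    target loss than the reference model achieves on comparable data."""
    return imputation_loss(reference_model, series, mask) - imputation_loss(target_model, series, mask)

def is_member(score, threshold=0.0):
    """Flag a series as a likely training member when the calibrated score is high."""
    return score > threshold

# Toy demonstration (hypothetical models): a "memorizing" target reproduces the
# series exactly, while the reference imputes masked values with the observed mean.
series = np.array([1.0, 2.0, 3.0, 4.0])
mask = np.array([False, True, False, True])        # positions hidden from the models
memorizing_target = lambda s, m: s.copy()
mean_reference = lambda s, m: np.where(m, s[~m].mean(), s)

score = membership_score(memorizing_target, mean_reference, series, mask)
print(is_member(score))  # prints True: the memorized series is flagged as a member
```

The reference model supplies a per-sample difficulty baseline, which is what lets this style of attack succeed against models whose average train/test loss gap is small.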