Discrete Prototypical Memories for Federated Time Series Foundation Models
arXiv cs.LG / 4/7/2026
Key Points
- The paper proposes FeDPM, a federated learning framework that recasts time-series modeling as a discrete latent-space problem: representations are quantized against learned prototypical memories (a minimal sketch of this quantization step appears after this list).
- It targets two key gaps in existing federated time-series foundation models: semantic mismatch between text-centric LLM latent spaces and time-series data, and the tendency of parameter-sharing FL to force heterogeneous domains into an overly continuous shared space.
- FeDPM learns domain-local prototypical memory priors, then aligns the cross-domain memories to obtain a unified discrete representation across regimes.
- It adds a domain-specific memory update mechanism that balances shared knowledge against each domain's personalized prototypical information (both steps are sketched after this list).
- The authors report that extensive experiments validate FeDPM's efficiency and effectiveness, and they release their code publicly for replication.
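
The summary does not give FeDPM's exact quantization rule, so the following is a minimal sketch, assuming a standard vector-quantization scheme: each continuous encoder output is snapped to its nearest prototype in a learned memory bank, with a straight-through estimator and the usual commitment/codebook losses. The class name `PrototypicalMemory` and its hyperparameters are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

class PrototypicalMemory(torch.nn.Module):
    """Hypothetical prototype bank: encoder outputs are snapped to their
    nearest prototype, yielding a discrete latent code per sample."""

    def __init__(self, num_prototypes: int = 64, dim: int = 128):
        super().__init__()
        self.prototypes = torch.nn.Parameter(torch.randn(num_prototypes, dim))

    def forward(self, z: torch.Tensor):
        # z: (batch, dim) continuous encoder outputs.
        dists = torch.cdist(z, self.prototypes)   # (batch, K) distances
        codes = dists.argmin(dim=-1)              # discrete code per sample
        z_q = self.prototypes[codes]              # quantized latents
        # Straight-through estimator: gradients flow to the encoder
        # as if quantization were the identity map.
        z_st = z + (z_q - z).detach()
        # VQ-style losses pulling encodings and prototypes together.
        commit = F.mse_loss(z, z_q.detach())
        codebook = F.mse_loss(z_q, z.detach())
        return z_st, codes, commit + codebook

# Usage: quantize a batch of 32 encoder outputs.
mem = PrototypicalMemory(num_prototypes=64, dim=128)
z = torch.randn(32, 128)
z_discrete, codes, vq_loss = mem(z)
```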
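Likewise, the summary does not detail how memories are aligned across domains or how the domain-specific update works. The sketch below assumes prototypes are matched across clients by optimal assignment before server-side averaging, and that each client then interpolates between the shared bank and its local one. The function names `align_memories` and `personalized_update` and the blending weight `alpha` are hypothetical.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def align_memories(client_memories):
    """Match each client's K prototypes to a reference bank (client 0)
    by optimal assignment on pairwise distances, then average the
    aligned banks into one shared memory."""
    ref = client_memories[0]                        # (K, dim)
    aligned = [ref]
    for mem in client_memories[1:]:
        cost = np.linalg.norm(ref[:, None, :] - mem[None, :, :], axis=-1)
        _, cols = linear_sum_assignment(cost)       # ref[i] <-> mem[cols[i]]
        aligned.append(mem[cols])
    return np.stack(aligned).mean(axis=0)           # shared prototypes

def personalized_update(local_mem, shared_mem, alpha=0.5):
    """Blend the shared memory back into a client's local bank;
    alpha controls how much domain-specific information is kept
    (alpha=1.0 would keep the memory fully local)."""
    return alpha * local_mem + (1.0 - alpha) * shared_mem
```

Matching before averaging matters here: without it, prototype k on one client may encode a different regime than prototype k on another, and naive averaging would blur them together.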