Prototype-Aligned Federated Soft-Prompts for Continual Web Personalization

arXiv cs.LG, March 31, 2026


Key Points

  • The paper proposes ProtoFed-SP, a privacy-conscious, parameter-efficient prompting framework for continual web personalization under non-stationary user behavior and privacy constraints.
  • ProtoFed-SP uses a dual-timescale design: a fast, sparse short-term soft prompt for session intent and a slow long-term soft prompt anchored to a server-side prototype library.
  • Long-term prototypes are updated via differentially private federated aggregation, and user queries are routed to the Top-M most relevant prototypes to compose personalized prompts on the fly.
  • Experiments on eight benchmarks show improved ranking and engagement metrics (e.g., NDCG@10 +2.9% and HR@10 +2.0% over the strongest baselines), along with reduced forgetting, while maintaining accuracy under practical differential-privacy (DP) budgets.
  • The authors frame the approach as a controllable way to balance stability (retaining long-term preferences) and plasticity (adapting to new intents) using a transparent prompting interface anchored to semantic prototypes.
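The routing-and-composition step described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: the scoring rule (cosine similarity), the softmax weighting, and the token-concatenation layout are all assumptions made here for concreteness.

```python
import numpy as np

def compose_personalized_prompt(query_emb, prototype_keys, prototype_prompts,
                                short_term_prompt, top_m=4):
    """Route a query to its Top-M prototypes and compose a soft prompt.

    Assumptions (not specified by the paper): cosine similarity for routing,
    softmax-weighted averaging of prototype prompts for the long-term part.

    query_emb         : (d,)       query embedding
    prototype_keys    : (K, d)     one key vector per server-side prototype
    prototype_prompts : (K, L, d)  soft-prompt tokens attached to each prototype
    short_term_prompt : (S, d)     fast, per-session prompt tokens
    """
    # Cosine similarity between the query and every prototype key.
    keys = prototype_keys / np.linalg.norm(prototype_keys, axis=1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    scores = keys @ q                                  # (K,)

    # Select the Top-M prototypes and softmax their scores into weights.
    top = np.argsort(scores)[-top_m:]
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()

    # Long-term prompt: similarity-weighted mix of the selected prototypes.
    long_term = np.tensordot(w, prototype_prompts[top], axes=1)  # (L, d)

    # Personalized prompt: short-term tokens followed by long-term tokens,
    # injected ahead of the frozen backbone's input.
    return np.concatenate([short_term_prompt, long_term], axis=0)
```

Keeping the backbone frozen and varying only these prompt tokens is what makes the interface parameter-efficient: personalization state is the small `(S + L, d)` prompt, not model weights.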

Abstract

Continual web personalization is essential for engagement, yet real-world non-stationarity and privacy constraints make it hard to adapt quickly without forgetting long-term preferences. We target this gap by seeking a privacy-conscious, parameter-efficient interface that controls stability-plasticity at the user/session level while tying user memory to a shared semantic prior. We propose ProtoFed-SP, a prompt-based framework that injects dual-timescale soft prompts into a frozen backbone: a fast, sparse short-term prompt tracks session intent, while a slow long-term prompt is anchored to a small server-side prototype library that is continually refreshed via differentially private federated aggregation. Queries are routed to Top-M prototypes to compose a personalized prompt. Across eight benchmarks, ProtoFed-SP improves NDCG@10 by +2.9% and HR@10 by +2.0% over the strongest baselines, with notable gains on Amazon-Books (+5.0% NDCG vs. INFER), H&M (+2.5% vs. Dual-LoRA), and Taobao (+2.2% vs. FedRAP). It also lowers forgetting (AF) and Steps-to-95% and preserves accuracy under practical DP budgets. Our contribution is a unifying, privacy-aware prompting interface with prototype anchoring that delivers robust continual personalization and offers a transparent, controllable mechanism to balance stability and plasticity in deployment.
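The abstract's "continually refreshed via differentially private federated aggregation" step can be illustrated with a standard DP-FedAvg-style sketch. The clipping threshold, Gaussian mechanism, and noise scaling below are generic assumptions; the paper's actual update rule and privacy accounting may differ.

```python
import numpy as np

def dp_aggregate_prototypes(client_updates, clip_norm=1.0,
                            noise_multiplier=1.0, rng=None):
    """Differentially private server-side aggregation of prototype updates.

    Illustrative Gaussian-mechanism sketch (assumed, not from the paper).
    client_updates : list of (K, d) arrays, one per participating client.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(client_updates)

    # 1) Clip each client's update to bound its L2 sensitivity.
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in client_updates]

    # 2) Average the clipped updates across clients.
    mean = sum(clipped) / n

    # 3) Add calibrated Gaussian noise; std shrinks with cohort size n.
    sigma = noise_multiplier * clip_norm / n
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

Because only clipped, noised prototype updates leave the client, raw interaction histories never reach the server; the DP budget is governed by `clip_norm`, `noise_multiplier`, and the number of aggregation rounds.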