Personalized Digital Health Modeling with Adaptive Support Users

arXiv cs.AI / 5/5/2026


Key Points

  • The paper addresses why digital health personalization is difficult: user-specific data is often scarce and noisy, which leads existing approaches to biased transfer and weak generalization.
  • It proposes a unified framework that personalizes a user model by adaptively weighting support users, leveraging both similar users for transfer and dissimilar users via contrastive regularization to avoid misleading correlations.
  • An iterative optimization method jointly updates the model parameters and the similarity weights assigned to other users.
  • Experiments across six tasks on four real-world digital health datasets show consistent gains over population-level and existing personalized baselines, including up to 10% lower RMSE on large datasets and about 25% lower RMSE in low-data scenarios.
  • The learned adaptive weights are intended to improve data efficiency and offer interpretable guidance for selecting targeted data sources for each user.
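The combined objective sketched in the bullets above can be written schematically as follows. The notation here is illustrative, not the paper's own: $\theta_u$ is the target user's model, $D_u$ the user's data, $\mathcal{S}_u$ and $\mathcal{N}_u$ the sets of similar and dissimilar support users, $w_{uv}$ the learned adaptive weights, and $R$ some contrastive regularizer that penalizes fitting patterns shared with dissimilar users:

```latex
\min_{\theta_u,\, w}\;
  \underbrace{\ell(\theta_u; D_u)}_{\text{personal loss}}
  \;+\; \lambda \sum_{v \in \mathcal{S}_u} w_{uv}\, \ell(\theta_u; D_v)
  \;+\; \mu \, R\!\left(\theta_u;\, \{D_v : v \in \mathcal{N}_u\}\right)
```

The iterative algorithm then alternates between updating $\theta_u$ with $w$ fixed and re-estimating the weights $w_{uv}$ with $\theta_u$ fixed.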

Abstract

Personalized models are essential in digital health because individuals exhibit substantial physiological and behavioral heterogeneity. Yet personalization is limited by scarce and noisy user-specific data. Most existing methods rely on population pretraining or data from similar users only, which can lead to biased transfer and weak generalization. We propose a unified personalization framework that trains a personal model using adaptively weighted support users, including both similar and dissimilar individuals. The objective integrates personal loss, similarity-weighted transfer from similar users, and contrastive regularization from dissimilar users to suppress misleading correlations. An iterative optimization algorithm jointly updates model parameters and user similarity weights. Experiments on six tasks across four real-world digital health datasets show consistent improvements over population and personalized baselines. The method achieves up to 10% lower RMSE on large-scale datasets and approximately 25% lower RMSE in low-data settings. The learned adaptive weights improve data efficiency and provide interpretable guidance for targeted data selection.
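To make the alternating scheme concrete, below is a minimal toy sketch of the general idea: a target user with scarce data borrows strength from support users via loss-weighted transfer, and the weights are re-estimated each round so dissimilar users are suppressed. Everything here is an illustrative assumption, not the paper's method: the personal model is plain linear regression, the contrastive term is replaced by simple down-weighting of high-loss support users, and the softmax re-weighting rule is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_user(n, w_true, noise=0.1):
    """Synthetic user: n observations from a linear model with Gaussian noise."""
    X = rng.normal(size=(n, 3))
    return X, X @ w_true + noise * rng.normal(size=n)

def loss(theta, X, y):
    return np.mean((X @ theta - y) ** 2)

# Target user has only 8 samples; support users have 100 each.
w_target = np.array([1.0, -0.5, 0.2])
Xt, yt = make_user(8, w_target)
support = [
    make_user(100, w_target + 0.05 * rng.normal(size=3)),  # similar
    make_user(100, w_target + 0.05 * rng.normal(size=3)),  # similar
    make_user(100, -w_target),                             # dissimilar
]

theta = np.zeros(3)
weights = np.full(len(support), 1.0 / len(support))  # adaptive weights
lam, lr, temp = 0.5, 0.05, 1.0                       # illustrative constants

for _ in range(300):
    # Step 1: gradient step on personal loss + weighted transfer loss.
    grad = 2 * Xt.T @ (Xt @ theta - yt) / len(yt)
    for w, (Xs, ys) in zip(weights, support):
        grad += lam * w * 2 * Xs.T @ (Xs @ theta - ys) / len(ys)
    theta -= lr * grad
    # Step 2: re-weight support users. Lower loss under the current
    # personal model is read as higher similarity (softmax over -loss),
    # so the dissimilar user's influence decays toward zero.
    ls = np.array([loss(theta, Xs, ys) for Xs, ys in support])
    weights = np.exp(-ls / temp)
    weights /= weights.sum()

print(np.round(weights, 3))        # dissimilar user ends up near-zero
print(round(loss(theta, Xt, yt), 4))
```

This mirrors the paper's two-step structure (model update, then weight update), and the final weights illustrate the interpretability claim: the learned weights indicate which support users actually informed the personal model.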