Can we generate portable representations for clinical time series data using LLMs?

arXiv cs.LG / 3/26/2026


Key Points

  • The paper investigates whether frozen LLMs can generate portable patient embeddings from irregular ICU time series so that models trained in one hospital generalize to others with minimal retraining.
  • The method converts ICU time series into concise natural-language summaries using a frozen LLM, then embeds those summaries with a frozen text embedding model to produce fixed-length vectors for downstream forecasting and classification.
  • Experiments across MIMIC-IV, HIRID, and PPICU show competitive performance versus several common in-hospital or representation-learning baselines, with smaller relative drops under cross-hospital transfer.
  • Structured prompt design is found to be important for reducing variance in downstream predictive performance, improving robustness without changing mean accuracy.
  • The resulting portable representations improve few-shot learning while not increasing demographic recoverability of age/sex, suggesting limited additional privacy risk.
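The summarize-then-embed pipeline in the key points can be sketched as follows. This is a minimal, runnable stand-in, not the paper's implementation: `summarize_timeseries` is a hypothetical rule-based substitute for the frozen LLM's natural-language summary, and `embed_text` is a hashed bag-of-words substitute for the frozen text embedding model. What it illustrates is the data flow: irregular (time, variable, value) events → short text summary → fixed-length vector.

```python
# Sketch of the portable-embedding pipeline (stand-ins for the frozen
# LLM summarizer and frozen text embedder; names are illustrative).
import hashlib
import math

def summarize_timeseries(events):
    """Stand-in for the frozen-LLM step: turn irregular
    (time, variable, value) tuples into a short text summary."""
    parts = []
    for var in sorted({v for _, v, _ in events}):
        vals = [x for _, v2, x in events if v2 == var]
        parts.append(f"{var} ranged {min(vals):.1f}-{max(vals):.1f} "
                     f"(last {vals[-1]:.1f}, n={len(vals)})")
    return "ICU summary: " + "; ".join(parts) + "."

def embed_text(text, dim=64):
    """Stand-in for the frozen text embedder: hashed bag-of-words,
    L2-normalized, always `dim`-dimensional regardless of text length."""
    vec = [0.0] * dim
    for tok in text.lower().split():
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def patient_embedding(events, dim=64):
    """Full pipeline: events -> summary text -> fixed-length vector."""
    return embed_text(summarize_timeseries(events), dim)
```

The fixed output dimension is what makes the representation portable: patients with very different observation counts map to vectors of the same shape, so a downstream predictor trained at one hospital can consume embeddings produced anywhere.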

Abstract

Deploying clinical ML is slow and brittle: models that work at one hospital often degrade under distribution shifts at the next. In this work, we study a simple question -- can large language models (LLMs) create portable patient embeddings, i.e., representations of patients that enable a downstream predictor built at one hospital to be used elsewhere with minimal-to-no retraining or fine-tuning? To do so, we map irregular ICU time series onto concise natural-language summaries using a frozen LLM, then embed each summary with a frozen text embedding model to obtain a fixed-length vector that can serve as input to a variety of downstream predictors. Across three cohorts (MIMIC-IV, HIRID, PPICU) and multiple clinically grounded forecasting and classification tasks, we find that our approach is simple, easy to use, and competitive in-distribution with grid imputation, self-supervised representation learning, and time series foundation models, while exhibiting smaller relative performance drops when transferring to new hospitals. We study how performance varies with prompt design, finding that structured prompts are crucial for reducing the variance of the predictive models without altering mean accuracy. We find that these portable representations improve few-shot learning and do not increase demographic recoverability of age or sex relative to baselines, suggesting little additional privacy risk. Our work points to the potential of LLMs as tools for enabling the scalable deployment of production-grade predictive models by reducing engineering overhead.
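The demographic-recoverability check mentioned in the abstract can be sketched as a simple probing experiment: train a classifier to predict a sensitive attribute (e.g., sex) from the patient embeddings, and compare its held-out accuracy against the same probe on baseline representations. The paper does not specify its probe, so the nearest-centroid classifier below is an assumed, illustrative choice.

```python
# Sketch of a demographic-recoverability probe: if a simple classifier
# cannot predict a sensitive attribute from the embeddings any better
# than from baseline representations, the embeddings add little
# demographic leakage. Nearest-centroid probe is an assumed stand-in.

def fit_centroids(X, y):
    """Mean embedding per attribute value (the probe's 'training')."""
    centroids = {}
    for label in set(y):
        rows = [x for x, l in zip(X, y) if l == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Assign x to the attribute value with the closest centroid."""
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda lbl: sq_dist(centroids[lbl], x))

def recoverability(X_train, y_train, X_test, y_test):
    """Held-out probe accuracy; compare against a baseline's score."""
    centroids = fit_centroids(X_train, y_train)
    hits = sum(predict(centroids, x) == y for x, y in zip(X_test, y_test))
    return hits / len(y_test)
```

In the paper's framing, the claim is comparative: recoverability from the LLM-derived embeddings is no higher than from the baselines, so the portability gain does not come at the cost of extra privacy risk.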