Learning with Incomplete Context: Linear Contextual Bandits with Pretrained Imputation

arXiv stat.ML / 4/3/2026


Key Points

  • The paper studies online linear contextual bandits in which the context is only partially observed and may be complex or nonstationary, supplemented by an auxiliary dataset of fully observed contexts collected without adaptive interventions.
  • It proposes PULSE-UCB, which uses pretrained models trained on the auxiliary data to impute missing context features during online decision-making.
  • The authors derive regret bounds that split into a standard contextual bandit regret term plus an additional term reflecting the quality (uncertainty/accuracy) of the pretrained imputations.
  • In the i.i.d. setting with Hölder-smooth missing features, PULSE-UCB is shown to reach near-optimal performance with matching lower bounds, clarifying when pretrained imputation is most beneficial.
  • The results provide quantitative guidance on how errors in predicted contexts degrade decision quality and how much historical auxiliary data is needed to improve downstream learning.
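
The mechanism described above, impute missing context features with a model pretrained on auxiliary data, then run a linear UCB policy on the imputed contexts, can be illustrated with a minimal sketch. This is not the paper's PULSE-UCB construction (which also accounts for imputation uncertainty in its bounds); the ridge-regression imputer, dimensions, noise levels, and simulation setup here are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_obs, d_miss = 2, 2               # observed / missing context dimensions (assumed)
d = d_obs + d_miss
K = 3                              # number of arms (assumed)
theta = rng.normal(size=(K, d))    # unknown per-arm reward parameters

# "Pretrained" imputer: ridge regression fit on an auxiliary dataset of fully
# observed contexts -- a stand-in for the paper's pretrained model.
W_true = rng.normal(size=(d_obs, d_miss))   # missing features correlate with observed ones
X_aux = rng.normal(size=(500, d))
X_aux[:, d_obs:] = X_aux[:, :d_obs] @ W_true + 0.1 * rng.normal(size=(500, d_miss))
G = X_aux[:, :d_obs].T @ X_aux[:, :d_obs]
A_imp = np.linalg.solve(G + np.eye(d_obs), X_aux[:, :d_obs].T @ X_aux[:, d_obs:])

def impute(x_obs):
    """Fill in the unobserved features using the pretrained imputer."""
    return np.concatenate([x_obs, x_obs @ A_imp])

# LinUCB run on imputed contexts.
alpha, lam = 1.0, 1.0
grams = [lam * np.eye(d) for _ in range(K)]   # per-arm regularized Gram matrices
bvecs = [np.zeros(d) for _ in range(K)]

total_reward = 0.0
for t in range(1000):
    x_full = rng.normal(size=d)
    x_full[d_obs:] = x_full[:d_obs] @ W_true + 0.1 * rng.normal(size=d_miss)
    x_hat = impute(x_full[:d_obs])            # only the observed part is available online
    ucb = []
    for k in range(K):
        g_inv = np.linalg.inv(grams[k])
        mu = g_inv @ bvecs[k]
        ucb.append(x_hat @ mu + alpha * np.sqrt(x_hat @ g_inv @ x_hat))
    k = int(np.argmax(ucb))
    r = x_full @ theta[k] + 0.1 * rng.normal()  # reward depends on the true context
    grams[k] += np.outer(x_hat, x_hat)
    bvecs[k] += r * x_hat
    total_reward += r
```

In this toy setup the imputation error feeds directly into the bandit's regret, which is the trade-off the paper's regret decomposition quantifies: a standard bandit term plus a term driven by pretrained-model quality.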

Abstract

The rise of large-scale pretrained models has made it feasible to generate predictive or synthetic features at low cost, raising the question of how to incorporate such surrogate predictions into downstream decision-making. We study this problem in the setting of online linear contextual bandits, where contexts may be complex, nonstationary, and only partially observed. In addition to bandit data, we assume access to an auxiliary dataset containing fully observed contexts -- common in practice since such data are collected without adaptive interventions. We propose PULSE-UCB, an algorithm that leverages pretrained models trained on the auxiliary data to impute missing features during online decision-making. We establish regret guarantees that decompose into a standard bandit term plus an additional component reflecting pretrained model quality. In the i.i.d. context case with Hölder-smooth missing features, PULSE-UCB achieves near-optimal performance, supported by matching lower bounds. Our results quantify how uncertainty in predicted contexts affects decision quality and how much historical data is needed to improve downstream learning.