Learning with Incomplete Context: Linear Contextual Bandits with Pretrained Imputation
arXiv stat.ML / 4/3/2026
Key Points
- The paper examines online linear contextual bandits when the context is only partially observed and potentially complex or nonstationary, using a second fully observed (auxiliary) dataset collected without adaptive interventions.
- It proposes PULSE-UCB, which uses models pretrained on the auxiliary data to impute the missing context features during online decision-making.
- The authors derive regret bounds that split into a standard contextual bandit regret term plus an additional term reflecting the quality (uncertainty/accuracy) of the pretrained imputations.
- In the i.i.d. setting with Hölder-smooth missing features, PULSE-UCB achieves near-optimal regret, with matching lower bounds that clarify when pretrained imputation is most beneficial.
- The results provide quantitative guidance on how errors in predicted contexts degrade decision quality and how much historical auxiliary data is needed to improve downstream learning.
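The pipeline the key points describe — impute the missing context features with a pretrained model, then run a standard optimistic linear bandit on the imputed context — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the zero-fill `imputer`, the LinUCB-style arm selection, and all dimensions and constants (`d`, `K`, `T`, `alpha`) are assumptions for the sake of a runnable example; in PULSE-UCB the imputer would be trained on the fully observed auxiliary dataset.

```python
import numpy as np

rng = np.random.default_rng(0)
d, K, T = 4, 3, 500                  # context dim, arms, horizon (assumed)
theta = rng.normal(size=(K, d))      # hypothetical true arm parameters

def imputer(partial_ctx, mask):
    """Stand-in for a pretrained imputation model: fills missing
    features with zeros. In the paper's setting this would be a model
    pretrained on the auxiliary, fully observed data."""
    ctx = partial_ctx.copy()
    ctx[~mask] = 0.0
    return ctx

# Per-arm ridge-regression state for a LinUCB-style learner
A = [np.eye(d) for _ in range(K)]    # regularized design matrices
b = [np.zeros(d) for _ in range(K)]  # response vectors
alpha = 1.0                          # exploration width (assumed)

total_reward = 0.0
for t in range(T):
    full_ctx = rng.normal(size=d)
    mask = rng.random(d) > 0.3       # ~30% of features go missing
    x = imputer(np.where(mask, full_ctx, np.nan), mask)

    # Upper-confidence scores are computed on the *imputed* context,
    # so imputation error feeds directly into the regret.
    ucbs = []
    for k in range(K):
        A_inv = np.linalg.inv(A[k])
        theta_hat = A_inv @ b[k]
        ucbs.append(theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x))
    k = int(np.argmax(ucbs))

    # Reward is generated from the full (partially unobserved) context
    r = theta[k] @ full_ctx + 0.1 * rng.normal()
    A[k] += np.outer(x, x)
    b[k] += r * x
    total_reward += r
```

The gap between this learner and one that sees `full_ctx` directly corresponds to the extra regret term the paper attributes to imputation quality.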