AI Navigate

Learning When to Trust in Contextual Bandits

arXiv cs.AI / 3/17/2026

📰 News · Models & Research

Key Points

  • The paper challenges the assumption that feedback sources are either globally trustworthy or globally adversarial by introducing Contextual Sycophancy, where evaluators are truthful in benign contexts but biased in critical ones.
  • It shows that standard robust reinforcement learning methods fail in this setting due to Contextual Objective Decoupling.
  • It proposes CESA-LinUCB, which learns a high-dimensional Trust Boundary for each evaluator to adaptively weigh feedback.
  • It proves sublinear regret (Õ(√T)) against contextual adversaries and shows ground-truth recovery even when no evaluator is globally reliable.

Abstract

Standard approaches to Robust Reinforcement Learning assume that feedback sources are either globally trustworthy or globally adversarial. In this paper, we challenge this assumption and identify a more subtle failure mode, which we term Contextual Sycophancy: evaluators are truthful in benign contexts but strategically biased in critical ones. We prove that standard robust methods fail in this setting, suffering from Contextual Objective Decoupling. To address this, we propose CESA-LinUCB, which learns a high-dimensional Trust Boundary for each evaluator. We prove that CESA-LinUCB achieves sublinear regret \tilde{O}(\sqrt{T}) against contextual adversaries, recovering the ground truth even when no evaluator is globally reliable.
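To make the core idea more concrete, here is a minimal Python sketch of a LinUCB-style bandit that weighs each evaluator's feedback by a learned, context-dependent trust score. This is an illustration of the general "contextual trust" concept only, not the paper's CESA-LinUCB algorithm; the class name, the linear trust model, and the agreement-based trust update are all assumptions made for this sketch.

```python
import numpy as np

class TrustWeightedLinUCB:
    """Illustrative sketch (NOT the paper's CESA-LinUCB): a LinUCB-style
    bandit that also fits, per evaluator, a linear trust model over
    contexts and weights each evaluator's reported reward accordingly."""

    def __init__(self, dim, n_evaluators, alpha=1.0, lam=1.0):
        self.alpha = alpha
        # Ridge-regression statistics for the standard LinUCB reward model.
        self.A = lam * np.eye(dim)
        self.b = np.zeros(dim)
        # Per-evaluator trust statistics: w_e @ x approximates reliability.
        self.TA = [lam * np.eye(dim) for _ in range(n_evaluators)]
        self.Tb = [np.zeros(dim) for _ in range(n_evaluators)]

    def trust(self, e, x):
        # Predicted reliability of evaluator e in context x, clipped to [0, 1].
        w = np.linalg.solve(self.TA[e], self.Tb[e])
        return float(np.clip(w @ x, 0.0, 1.0))

    def select(self, arm_features):
        # Standard LinUCB choice: mean estimate plus exploration bonus.
        theta = np.linalg.solve(self.A, self.b)
        scores = [theta @ x + self.alpha * np.sqrt(x @ np.linalg.solve(self.A, x))
                  for x in arm_features]
        return int(np.argmax(scores))

    def update(self, x, feedback):
        # feedback: list of (evaluator_id, reported_reward) pairs.
        # Aggregate reports, weighting each by the evaluator's contextual trust.
        weights = np.array([self.trust(e, x) for e, _ in feedback])
        reports = np.array([r for _, r in feedback])
        if weights.sum() > 0:
            r_hat = float(weights @ reports / weights.sum())
        else:
            r_hat = float(reports.mean())
        self.A += np.outer(x, x)
        self.b += r_hat * x
        # Nudge each trust model toward agreement with the consensus estimate
        # (a heuristic stand-in for the paper's Trust Boundary learning).
        for e, r in feedback:
            agree = 1.0 - min(abs(r - r_hat), 1.0)
            self.TA[e] += np.outer(x, x)
            self.Tb[e] += agree * x
```

The key difference from vanilla LinUCB is that trust is a function of the context vector, so an evaluator can be down-weighted in "critical" regions of the context space while remaining fully trusted elsewhere, which is exactly the contextual-sycophancy regime the paper targets.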