Pressure, What Pressure? Sycophancy Disentanglement in Language Models via Reward Decomposition

arXiv cs.AI / 4/8/2026


Key Points

  • The paper identifies that standard alignment approaches struggle with sycophancy because a single scalar reward blends two failure modes: pressure capitulation and evidence blindness.
  • It formalizes “pressure independence” and “evidence responsiveness” to provide a framework for disentangled training of sycophancy behaviors.
  • The authors propose a reward decomposition method using a multi-component GRPO objective with five terms covering pressure resistance, context fidelity, position consistency, agreement suppression, and factual correctness.
  • Experiments across five base models and multiple authority/evidence conditions show consistent reductions in sycophancy across all evaluated metric axes, with ablations indicating the reward terms each control distinct behavioral dimensions.
  • The learned resistance to pressured prompting generalizes beyond the training setup, reducing answer-priming sycophancy on SycophancyEval by up to 17 points even though that pressure form is absent from training.
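The key points reference a multi-component GRPO objective. In GRPO, each sampled response's reward is normalized against the group of rollouts for the same prompt; the sketch below shows that standard group-relative advantage computation. It illustrates the general GRPO mechanism, not the paper's specific implementation.

```python
import statistics

def group_relative_advantages(rewards):
    """GRPO-style advantages: normalize each rollout's scalar reward
    against the mean and (population) std of its sampling group."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    if std == 0:
        # All rollouts scored identically: no learning signal for this group.
        return [0.0 for _ in rewards]
    return [(r - mean) / std for r in rewards]
```

Because advantages are computed per group, a response is rewarded only relative to its siblings for the same prompt, which is what lets a decomposed reward steer distinct behaviors within each group.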

Abstract

Large language models exhibit sycophancy, the tendency to shift their stated positions toward perceived user preferences or authority cues regardless of evidence. Standard alignment methods fail to correct this because scalar reward models conflate two distinct failure modes into a single signal: pressure capitulation, where the model changes a correct answer under social pressure, and evidence blindness, where the model ignores the provided context entirely. We operationalise sycophancy through formal definitions of pressure independence and evidence responsiveness, serving as a working framework for disentangled training rather than a definitive characterisation of the phenomenon. We propose the first approach to sycophancy reduction via reward decomposition, introducing a multi-component Group Relative Policy Optimisation (GRPO) reward that decomposes the training signal into five terms: pressure resistance, context fidelity, position consistency, agreement suppression, and factual correctness. We train using a contrastive dataset pairing pressure-free baselines with pressured variants across three authority levels and two opposing evidence contexts. Across five base models, our two-phase pipeline consistently reduces sycophancy on all metric axes, with ablations confirming that each reward term governs an independent behavioural dimension. The learned resistance to pressure generalises beyond our training methodology and prompt structure, reducing answer-priming sycophancy by up to 17 points on SycophancyEval despite the absence of such pressure forms during training.
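The abstract names the five reward terms but not how they are combined. A minimal sketch of one plausible composition is shown below; the uniform default weights and the function signature are illustrative assumptions, not the authors' formulation.

```python
# The five axes named in the abstract; how each score is computed
# (e.g. comparing pressured vs. pressure-free answers) is paper-specific.
REWARD_TERMS = [
    "pressure_resistance",
    "context_fidelity",
    "position_consistency",
    "agreement_suppression",
    "factual_correctness",
]

def decomposed_reward(scores, weights=None):
    """Combine per-axis scores (each assumed in [0, 1]) into one scalar.

    `scores` maps each term in REWARD_TERMS to a float; `weights` defaults
    to a uniform convex combination (an assumption for illustration).
    """
    if weights is None:
        weights = {t: 1.0 / len(REWARD_TERMS) for t in REWARD_TERMS}
    return sum(weights[t] * scores[t] for t in REWARD_TERMS)
```

Keeping the terms separate until this final combination is what enables the ablations the abstract describes: zeroing one weight removes exactly one behavioral dimension from the training signal.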
