Dual Optimal: Make Your LLM Peer-like with Dignity

arXiv cs.CL / 4/3/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper identifies a dual failure mode in aligned LLMs called the “Evasive Servant,” where models both validate incorrect user beliefs and avoid accountability via generic disclaimers.
  • It proposes the “Dignified Peer” framework to reduce sycophancy and evasiveness by combining anti-sycophancy behavior with trustworthiness supported by empathy and creativity.
  • To train and steer the desired behavior, the authors introduce the PersonaKnob dataset, which encodes a compositional partial order of multiple persona preferences.
  • They use a tolerant constrained Lagrangian DPO training method that dynamically balances persona dimensions to avoid collapse into single-mode or degenerate behaviors.
  • For evaluation, the work applies a psychometrically calibrated Item Response Theory protocol to separate true latent persona capability from judge biases and other confounders, reporting improved dignified, peer-like behavior in experiments.

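The constrained training idea in the bullets above can be sketched in simplified form. The paper's exact objective is not reproduced here; the function names, targets, and tolerance value below are illustrative assumptions. The core mechanism is standard dual ascent: each persona dimension gets a Lagrange multiplier that grows while its constraint is violated and decays to zero once the constraint is satisfied within a tolerance, so no single dimension can be silently sacrificed.

```python
# Hypothetical sketch of a tolerant constrained Lagrangian update for
# balancing multiple persona losses around a primary DPO loss.
# All names and numbers are illustrative, not the paper's formulation.

def lagrangian(primary_loss, persona_losses, targets, lambdas, eps):
    """Combined objective: primary loss plus weighted constraint violations.
    A constraint only contributes once its loss exceeds target + eps
    (the tolerance keeps multipliers from reacting to noise)."""
    total = primary_loss
    for name, loss in persona_losses.items():
        violation = max(0.0, loss - targets[name] - eps)
        total += lambdas[name] * violation
    return total

def update_multipliers(persona_losses, targets, lambdas, eps, lr=0.1):
    """Dual ascent step: raise lambda for violated persona constraints,
    shrink it toward zero when the constraint is satisfied."""
    new = {}
    for name, lam in lambdas.items():
        violation = persona_losses[name] - targets[name] - eps
        new[name] = max(0.0, lam + lr * violation)
    return new

# Example: anti-sycophancy is violated, empathy is within tolerance.
losses = {"anti_sycophancy": 0.9, "empathy": 0.4}
targets = {"anti_sycophancy": 0.5, "empathy": 0.5}
lambdas = {"anti_sycophancy": 0.0, "empathy": 0.0}
lambdas = update_multipliers(losses, targets, lambdas, eps=0.1)
# The anti-sycophancy multiplier rises; the empathy multiplier stays at 0,
# so the combined objective starts penalizing only the violated dimension.
```

Because the multipliers adapt online, a dimension that starts to collapse (its loss drifting above target) automatically gains weight in the combined objective, which is the mechanism the bullets describe for preventing single-mode behavior.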
Abstract

Current aligned language models exhibit a dual failure mode we term the Evasive Servant: they sycophantically validate flawed user beliefs while deflecting responsibility with boilerplate disclaimers. We propose the Dignified Peer framework, which counters servility with anti-sycophancy and trustworthiness, and mitigates evasiveness through empathy and creativity. Realizing this agent requires overcoming significant challenges in data supervision, objective collapse, and evaluation bias. We address these issues by introducing the PersonaKnob dataset, which features a compositional partial-order structure over multiple persona preferences. This data is used alongside a tolerant constrained Lagrangian DPO algorithm that dynamically balances all persona dimensions to prevent behavioral collapse. Additionally, we employ a psychometrically calibrated Item Response Theory evaluation protocol to disentangle latent model persona capability from confounders such as judge biases. Extensive empirical studies demonstrate that our approach successfully builds an LLM agent that is both dignified and peer-like.
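The IRT evaluation idea can be illustrated with the standard two-parameter logistic (2PL) model; this is a generic sketch, not the paper's calibrated protocol, and the item parameters below are invented. The point is that each evaluation item carries its own discrimination and difficulty (into which judge strictness can be folded), so the latent persona ability estimate is not confounded with how hard or biased individual items are.

```python
import math

# Generic 2PL Item Response Theory sketch (illustrative, not the
# paper's exact protocol): the probability that a model "passes"
# item i is a logistic function of its latent ability theta and
# the item's discrimination a_i and difficulty b_i.

def p_correct(theta, a, b):
    """2PL response curve: P(pass) = sigmoid(a * (theta - b))."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def estimate_theta(responses, items, grid=None):
    """Maximum-likelihood ability estimate via coarse grid search.
    responses: list of 0/1 outcomes; items: list of (a, b) pairs."""
    if grid is None:
        grid = [x / 100.0 for x in range(-400, 401)]
    def loglik(theta):
        ll = 0.0
        for r, (a, b) in zip(responses, items):
            p = p_correct(theta, a, b)
            ll += math.log(p) if r else math.log(1.0 - p)
        return ll
    return max(grid, key=loglik)

# Example: the model passes an easy and a medium item but fails a
# hard one; the ability estimate lands between the medium and hard
# difficulties rather than being a raw pass rate.
items = [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]   # (discrimination, difficulty)
theta_hat = estimate_theta([1, 1, 0], items)
```

In a calibrated protocol, item parameters would themselves be fit from many models' responses, which is what lets the evaluation separate "this model lacks the persona capability" from "this item, or the judge scoring it, is unusually harsh."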