PoliticsBench: Benchmarking Political Values in Large Language Models with Multi-Turn Roleplay

arXiv cs.CL / 3/26/2026


Key Points

  • The study proposes PoliticsBench, a framework that measures LLMs' potential political bias at a finer granularity than prior work, scoring ten distinct political values through multi-turn roleplay.
  • Eight models (Claude, Deepseek, Gemini, GPT, Grok, Llama, Qwen Base, Qwen Instruction-Tuned) were evaluated psychometrically: twenty evolving scenarios elicited each model's stance and course of action in free text.
  • Seven of the eight models leaned left, while only Grok leaned right; each left-leaning model is reported to exhibit liberal traits strongly and conservative traits only moderately.
  • Alignment scores varied only slightly across the stages of multi-turn roleplay, with no particular increasing or decreasing pattern.
  • Most models relied on consequence-based reasoning, whereas Grok tended to argue more counteractively, citing facts and statistics.
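The aggregation implied by the key points, scoring each response on multiple political values and averaging across roleplay stages to get an overall lean, can be sketched as follows. This is a minimal illustrative sketch: the model names, scores, scale (-1 conservative to +1 liberal), and the `overall_lean` helper are all assumptions for illustration, not the paper's actual data or scoring method.

```python
# Hypothetical sketch of aggregating per-value alignment scores into an
# overall political lean. All data and the -1..+1 scale are illustrative
# assumptions, not taken from the paper.
from statistics import mean

# Suppose each model's free-text responses were scored on several political
# values per roleplay stage, each on a -1 (conservative) .. +1 (liberal) axis.
scores = {
    "model_a": {"stage_1": [0.4, 0.2, 0.5], "stage_2": [0.3, 0.4, 0.5]},
    "model_b": {"stage_1": [-0.2, -0.3, -0.1], "stage_2": [-0.3, -0.2, -0.2]},
}

def overall_lean(per_stage):
    """Mean score across all values and stages; sign gives left (+) vs right (-)."""
    all_scores = [s for stage in per_stage.values() for s in stage]
    return mean(all_scores)

for model, per_stage in scores.items():
    lean = overall_lean(per_stage)
    label = "left-leaning" if lean > 0 else "right-leaning"
    print(f"{model}: {lean:+.2f} ({label})")
```

Comparing per-stage means with this overall mean would show the kind of stage-to-stage variation the study reports as slight and patternless.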

Abstract

While Large Language Models (LLMs) are increasingly used as primary sources of information, their potential for political bias may impact their objectivity. Existing benchmarks of LLM social bias primarily evaluate gender and racial stereotypes. When political bias is included, it is typically measured at a coarse level, neglecting the specific values that shape sociopolitical leanings. This study investigates political bias in eight prominent LLMs (Claude, Deepseek, Gemini, GPT, Grok, Llama, Qwen Base, Qwen Instruction-Tuned) using PoliticsBench: a novel multi-turn roleplay framework adapted from the EQ-Bench-v3 psychometric benchmark. We test whether commercially developed LLMs display a systematic left-leaning bias that becomes more pronounced in later stages of multi-stage roleplay. Through twenty evolving scenarios, each model reported its stance and determined its course of action. Scoring these responses on a scale of ten political values, we explored the values underlying chatbots' deviations from unbiased standards. Seven of our eight models leaned left, while Grok leaned right. Each left-leaning LLM strongly exhibited liberal traits and moderately exhibited conservative ones. We discovered slight variations in alignment scores across stages of roleplay, with no particular pattern. Though most models used consequence-based reasoning, Grok frequently argued with facts and statistics. Our study presents the first psychometric evaluation of political values in LLMs through multi-stage, free-text interactions.