What Is The Political Content in LLMs' Pre- and Post-Training Data?

arXiv cs.CL / 4/6/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper investigates how political bias in LLMs may emerge from the political composition of training data, framing its research questions around political-leaning distribution, data imbalance, cross-dataset similarity, and alignment between training data and model behavior.
  • Using large-scale sampling, political-leaning classification, and stance detection (see the sketch after this list), it finds that pre-training corpora are systematically skewed toward left-leaning content and contain substantially more politically engaged material than post-training data.
  • The study reports a strong correlation between the political stances detected in training data and the models' stances on the same policy issues, suggesting that data composition directly shapes downstream behavior.
  • Political biases are already present in base models and persist across post-training stages, and pre-training datasets exhibit similar political distributions despite being built with different curation strategies.
  • Overall, the results emphasize the need for greater data transparency as a foundation for more effective bias mitigation strategies.
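
For concreteness, here is a minimal sketch of what such a pipeline can look like. It is a hypothetical stand-in, not the paper's code: the zero-shot NLI model (facebook/bart-large-mnli), the label sets, the sample size, and the Jensen-Shannon similarity measure are all assumptions introduced for illustration.

```python
"""Hypothetical sketch of a sampling + labeling pipeline; not the paper's code."""
import random
from collections import Counter

from scipy.spatial.distance import jensenshannon
from transformers import pipeline

# Zero-shot NLI classifier as a stand-in for the paper's (unspecified here)
# political-leaning and stance classifiers.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LEANINGS = ["left-leaning", "right-leaning", "politically neutral"]
STANCES = ["supports", "opposes", "is neutral on"]


def sample_documents(corpus, k=1000, seed=0):
    """Uniformly sample k documents from a list-like corpus."""
    rng = random.Random(seed)
    return rng.sample(corpus, min(k, len(corpus)))


def label_leaning(text):
    """Most probable political leaning for one document (truncated input)."""
    result = classifier(text[:2000], candidate_labels=LEANINGS)
    return result["labels"][0]  # labels are sorted by descending score


def label_stance(text, policy_issue):
    """Stance of one document toward a given policy issue."""
    result = classifier(
        text[:2000],
        candidate_labels=STANCES,
        # The pipeline fills {} with each candidate label, e.g.
        # "This text supports universal healthcare."
        hypothesis_template=f"This text {{}} {policy_issue}.",
    )
    return result["labels"][0]


def leaning_distribution(corpus, k=1000):
    """Estimate a corpus-level distribution over political leanings."""
    counts = Counter(label_leaning(doc) for doc in sample_documents(corpus, k))
    total = sum(counts.values())
    return {label: counts[label] / total for label in LEANINGS}


def distribution_distance(dist_a, dist_b):
    """Jensen-Shannon distance between two leaning distributions
    (0 = identical), one way to quantify cross-dataset similarity."""
    return jensenshannon([dist_a[l] for l in LEANINGS],
                         [dist_b[l] for l in LEANINGS])
```

Comparing `leaning_distribution` outputs across corpora with `distribution_distance` is one way to operationalize the cross-dataset similarity question, under the assumptions above.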

Abstract

Large language models (LLMs) are known to generate politically biased text. Yet, it remains unclear how such biases arise, making it difficult to design effective mitigation strategies. We hypothesize that these biases are rooted in the composition of training data. Taking a data-centric perspective, we formulate research questions on (1) political leaning present in data, (2) data imbalance, (3) cross-dataset similarity, and (4) data-model alignment. We then examine how exposure to political content relates to models' stances on policy issues. We analyze the political content of pre- and post-training datasets of open-source LLMs, combining large-scale sampling, political-leaning classification, and stance detection. We find that training data is systematically skewed toward left-leaning content, with pre-training corpora containing substantially more politically engaged material than post-training data. We further observe a strong correlation between political stances in training data and model behavior, and show that pre-training datasets exhibit similar political distributions despite different curation strategies. In addition, we find that political biases are already present in base models and persist across post-training stages. These findings highlight the central role of data composition in shaping model behavior and motivate the need for greater data transparency.
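
The data-model alignment finding amounts to correlating per-issue stance scores measured in the training data with the model's own stances on the same issues. The sketch below shows the shape of that computation only; the issue names, the [-1, 1] scoring convention, the numbers, and the choice of Spearman correlation are all invented for illustration and are not taken from the paper.

```python
"""Toy illustration of the data-model alignment check; all values are invented."""
from scipy.stats import spearmanr

# Mean stance per policy issue measured in sampled training data,
# on a [-1, 1] scale (+1 = supportive content dominates).
data_stance = {
    "carbon tax": 0.42,
    "universal healthcare": 0.55,
    "stricter immigration limits": -0.31,
    "gun control": 0.38,
}

# The model's stance on the same issues, scored the same way
# (e.g., by prompting the model and classifying its answers).
model_stance = {
    "carbon tax": 0.35,
    "universal healthcare": 0.61,
    "stricter immigration limits": -0.22,
    "gun control": 0.44,
}

issues = sorted(data_stance)
rho, p_value = spearmanr([data_stance[i] for i in issues],
                         [model_stance[i] for i in issues])
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```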