LLMs Can Infer Political Alignment from Online Conversations
arXiv cs.CL / 3/13/2026
Key Points
- LLMs can reliably infer hidden political alignment from online discussions, outperforming traditional machine learning models on Debate.org and Reddit.
- Prediction accuracy improves when multiple text-level inferences are aggregated into a single user-level label, and it is higher in domains closer to politics.
- LLMs pick up on words that are predictive of political alignment without being explicitly political, revealing signals users may not realize they are exposing.
- The findings underscore both the capability and the risk of LLMs exploiting socio-cultural correlates in text: as online data exposure grows and AI capabilities advance, the potential for misuse increases, strengthening the case for privacy safeguards.
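The user-level aggregation mentioned above can be sketched as a simple majority vote over per-text predictions. This is a minimal illustration, not the paper's exact procedure; the function name and label set (`"left"`/`"right"`) are assumptions for the example.

```python
from collections import Counter

def aggregate_user_alignment(text_predictions):
    """Aggregate per-text alignment labels (e.g., 'left'/'right')
    into one user-level label by majority vote.

    Returns (label, confidence), where confidence is the fraction
    of texts agreeing with the majority label.
    """
    if not text_predictions:
        raise ValueError("need at least one per-text prediction")
    counts = Counter(text_predictions)
    label, votes = counts.most_common(1)[0]
    return label, votes / len(text_predictions)

# Example: five per-text LLM predictions for one hypothetical user
preds = ["left", "left", "right", "left", "left"]
print(aggregate_user_alignment(preds))  # ('left', 0.8)
```

Aggregating this way smooths out noisy single-text judgments, which is consistent with the reported accuracy gains at the user level.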