Large Language Models Unpack Complex Political Opinions through Target-Stance Extraction

arXiv cs.CL / March 26, 2026


Key Points

  • The paper proposes Target-Stance Extraction (TSE) as a way for Large Language Models to identify both the political target being discussed and the stance expressed toward that target, going beyond coarse partisan labels.
  • Researchers built a dataset of 1,084 Reddit posts from r/NeutralPolitics spanning 138 political targets to evaluate LLM performance on nuanced, multi-issue political discourse.
  • Experiments across proprietary and open-source LLMs using zero-shot, few-shot, and context-augmented prompting show that top-performing models can match the quality of highly trained human annotators; a sketch of what such a prompt might look like follows this list.
  • The approach is reported to be robust even for difficult posts with low inter-annotator agreement, suggesting reliability under ambiguous labeling conditions.
  • Overall, the study positions TSE with LLMs as a scalable method for computational social science and more granular political text analysis with minimal supervision.
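
To make the task concrete, here is a minimal sketch of what a zero-shot TSE query might look like. The paper does not publish its prompts, so the instruction wording, the JSON output format, the favor/against/neutral label set, and the generic `llm` callable are all illustrative assumptions rather than the authors' actual setup.

```python
import json
from typing import Callable

STANCES = {"favor", "against", "neutral"}

# Illustrative zero-shot instruction; the paper's actual prompt wording is not public.
TSE_PROMPT = """You are annotating political discourse.
Given the Reddit post below:
1. Identify the political target (a policy, figure, or issue) the post is about.
2. Classify the author's stance toward that target as favor, against, or neutral.
Answer with one JSON object with exactly the keys "target" and "stance".

Post: {post}
"""

def extract_target_stance(post: str, llm: Callable[[str], str]) -> dict:
    """One zero-shot TSE call; `llm` is any text-in/text-out model wrapper."""
    raw = llm(TSE_PROMPT.format(post=post))
    result = json.loads(raw)  # assumes the model returns valid JSON
    stance = str(result.get("stance", "")).lower()
    return {
        "target": result.get("target", ""),
        # Fall back to "neutral" if the model returns an unexpected label.
        "stance": stance if stance in STANCES else "neutral",
    }
```

Few-shot and context-augmented variants would change only the prompt, prepending labeled example posts or retrieved background text about candidate targets before the post to be annotated.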

Abstract

Political polarization emerges from a complex interplay of beliefs about policies, figures, and issues. However, most computational analyses reduce discourse to coarse partisan labels, overlooking how these beliefs interact. This is especially evident in online political conversations, which are often nuanced and cover a wide range of subjects, making it difficult to automatically identify the target of discussion and the opinion expressed toward it. In this study, we investigate whether Large Language Models (LLMs) can address this challenge through Target-Stance Extraction (TSE), a recent natural language processing task that combines target identification and stance detection, enabling more granular analysis of political opinions. For this, we construct a dataset of 1,084 Reddit posts from r/NeutralPolitics, covering 138 distinct political targets, and evaluate a range of proprietary and open-source LLMs using zero-shot, few-shot, and context-augmented prompting strategies. Our results show that the best models perform comparably to highly trained human annotators and remain robust on challenging posts with low inter-annotator agreement. These findings demonstrate that LLMs can extract complex political opinions with minimal supervision, offering a scalable tool for computational social science and political text analysis.
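
The abstract's robustness claim rests on splitting posts by inter-annotator agreement, but it does not say how that split is computed. A common recipe, sketched below as an assumption rather than the paper's method, scores each post by the fraction of annotators who picked the majority label and flags posts under a cutoff as "challenging".

```python
from collections import Counter

def majority_agreement(labels: list[str]) -> float:
    """Fraction of annotators who chose the most common label for one post."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# Hypothetical annotations: three annotators' stance labels per post.
annotations = {
    "post_1": ["against", "against", "against"],  # unanimous -> easy
    "post_2": ["favor", "neutral", "against"],    # three-way split -> hard
}

CUTOFF = 2 / 3  # assumed threshold; the paper's actual criterion may differ
hard_posts = {post for post, labels in annotations.items()
              if majority_agreement(labels) < CUTOFF}
print(hard_posts)  # {'post_2'}
```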