A Systematic Comparison of Prompting and Multi-Agent Methods for LLM-based Stance Detection

arXiv cs.CL / 4/30/2026


Key Points

  • The paper presents a systematic, fair comparison of five LLM-based approaches to stance detection, covering both prompt-based inference (Direct Prompting, Auto-CoT, StSQA) and multi-agent debate methods (COLA, MPRF); a minimal direct-prompting sketch follows this list.
  • Experiments across four datasets and 14 subtasks using 15 LLMs (7B–72B+ parameters) show that the best prompt-based method beats the best agent-based method on every model with complete results.
  • Multi-agent debate approaches also substantially increase cost, requiring about 7 to 12 times more API calls per sample than the best prompt-based approach.
  • Model size matters more than method choice, with performance gains leveling off around 32B parameters; reasoning-focused models such as DeepSeek-R1 do not consistently outperform general models of similar size.
  • Overall, the study suggests that for LLM-based stance detection, scaling the model is the dominant factor and simpler prompting can be more effective and efficient than multi-agent setups.
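Direct Prompting, the cheapest of the compared strategies, issues a single query per sample that asks the model for a stance label directly. Below is a minimal sketch of that setup, assuming an OpenAI-compatible chat API, a generic favor/against/none label set, and an illustrative model name and prompt wording; none of these reflect the paper's exact configuration.

```python
from openai import OpenAI

# Sketch of a Direct Prompting baseline for stance detection.
# Model name, label set, and prompt wording are illustrative assumptions.
client = OpenAI()  # assumes an API key / compatible endpoint is configured

LABELS = ["favor", "against", "none"]

def detect_stance(text: str, target: str, model: str = "gpt-4o-mini") -> str:
    """One API call per sample: ask the model for the author's stance toward the target."""
    prompt = (
        f"Text: {text}\n"
        f"Target: {target}\n"
        f"What is the author's stance toward the target? "
        f"Answer with exactly one of: {', '.join(LABELS)}."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = resp.choices[0].message.content.strip().lower()
    # Fall back to "none" if the reply does not contain a known label.
    return next((label for label in LABELS if label in answer), "none")

print(detect_stance("Climate policy reform is long overdue.", "carbon tax"))
```

The other prompt-based methods (Auto-CoT, StSQA) add reasoning steps or question decomposition on top of this single-call pattern, while the agent-based methods replace it with multiple interacting model calls per sample.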

Abstract

Stance detection identifies the attitude of a text author toward a given target. Recent studies have explored various LLM-based strategies for this task, from zero-shot prompting to multi-agent debate. However, existing works differ in data splits, base models, and evaluation protocols, making fair comparison difficult. We conduct a systematic comparison that evaluates five methods across two categories -- prompt-based inference (Direct Prompting, Auto-CoT, StSQA) and agent-based debate (COLA, MPRF) -- on four datasets with 14 subtasks, using 15 LLMs from six model families with parameter sizes from 7B to 72B+. Our experiments yield several findings. First, on all models with complete results, the best prompt-based method outperforms the best agent-based method, while agent methods require 7 to 12 times more API calls per sample. Second, model scale has a larger impact on performance than method choice, with gains plateauing around 32B. Third, reasoning-enhanced models (DeepSeek-R1) do not consistently outperform general models of the same size on this task.
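The reported cost gap follows from the call structure: a debate method queries several agents over multiple rounds and then aggregates, so per-sample API calls grow roughly as agents × rounds (plus any judge call), versus one call for direct prompting. A rough sketch of that accounting, where the agent and round counts are illustrative assumptions rather than COLA's or MPRF's actual configurations:

```python
# Illustrative per-sample API-call accounting for multi-agent debate.
# Agent and round counts below are assumptions, not the compared methods' settings.

def debate_calls(num_agents: int, num_rounds: int, judge_calls: int = 1) -> int:
    """Each agent speaks once per round; an optional judge call finalizes the label."""
    return num_agents * num_rounds + judge_calls

direct_prompting_calls = 1                       # one query per sample
small_debate = debate_calls(3, 2)                # 3 agents x 2 rounds + judge = 7 calls
large_debate = debate_calls(4, 3, judge_calls=0) # 4 agents x 3 rounds = 12 calls

print(small_debate / direct_prompting_calls)  # 7.0x
print(large_debate / direct_prompting_calls)  # 12.0x
```

Under these assumed settings the overhead lands in the 7x to 12x range the study reports, which is why the efficiency argument favors prompt-based methods even before accounting for accuracy.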