Rhetorical Questions in LLM Representations: A Linear Probing Study

arXiv cs.CL / 4/16/2026


Key Points

  • The study investigates how large language models encode rhetorical questions versus information-seeking questions by applying linear probing to two social-media datasets with different discourse contexts.
  • It finds that rhetorical signals appear early in the model representations and are most consistently captured by last-token features, with rhetorical questions becoming linearly separable from information-seeking ones within each dataset.
  • Cross-dataset transfer shows rhetorical question detection remains feasible, achieving AUROC around 0.7–0.8, indicating that useful signals generalize to some extent.
  • Despite moderate transfer performance, the paper shows that “transferability” does not mean a single shared representation: probes trained on different datasets yield very different rankings on the same target corpus.
  • Qualitative analysis attributes these probe divergences to multiple underlying rhetorical phenomena, including discourse-level stance across extended argumentation and more localized, syntax-driven interrogative cues.
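The probing methodology above can be sketched as a single linear classifier trained on frozen per-question representations. The sketch below uses synthetic vectors in place of actual LLM last-token hidden states, and the dimensions, signal strength, and scikit-learn classifier are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for last-token hidden states: in the paper's setup these
# would be extracted from an LLM (one vector per question); here we simulate
# a weak linear "rhetorical" signal so the sketch runs standalone.
rng = np.random.default_rng(0)
d = 64                      # hidden size (illustrative)
n = 400                     # questions per split

direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)   # the linear "rhetorical" direction

def make_split(n):
    y = rng.integers(0, 2, size=n)       # 1 = rhetorical, 0 = info-seeking
    X = rng.normal(size=(n, d)) + 1.5 * y[:, None] * direction
    return X, y

X_train, y_train = make_split(n)
X_test, y_test = make_split(n)

# The probe itself: one linear classifier on frozen representations.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auroc = roc_auc_score(y_test, probe.predict_proba(X_test)[:, 1])
print(f"within-dataset AUROC: {auroc:.2f}")
```

If the signal were truly layer- and position-dependent, as the study reports, one would repeat this fit per layer and per token position and compare the resulting AUROC curves.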

Abstract

Rhetorical questions are asked not to seek information but to persuade or signal stance. How large language models internally represent them remains unclear. We analyze rhetorical questions in LLM representations using linear probes on two social-media datasets with different discourse contexts, and find that rhetorical signals emerge early and are most stably captured by last-token representations. Rhetorical questions are linearly separable from information-seeking questions within datasets, and remain detectable under cross-dataset transfer, reaching AUROC around 0.7–0.8. However, we demonstrate that transferability does not simply imply a shared representation. Probes trained on different datasets produce different rankings when applied to the same target corpus, with overlap among the top-ranked instances often below 0.2. Qualitative analysis shows that these divergences correspond to distinct rhetorical phenomena: some probes capture discourse-level rhetorical stance embedded in extended argumentation, while others emphasize localized, syntax-driven interrogative acts. Together, these findings suggest that rhetorical questions in LLM representations are encoded by multiple linear directions emphasizing different cues, rather than a single shared direction.
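The low overlap among top-ranked instances can be measured with a simple set-intersection metric, sketched below. The `topk_overlap` helper and the synthetic scores are hypothetical stand-ins for the two probes' scores on a shared target corpus; the paper's exact overlap metric is not specified here.

```python
import numpy as np

def topk_overlap(scores_a, scores_b, k):
    """Fraction of shared instances among each probe's k highest-scored items."""
    top_a = set(np.argsort(scores_a)[-k:])   # indices of probe A's top-k
    top_b = set(np.argsort(scores_b)[-k:])   # indices of probe B's top-k
    return len(top_a & top_b) / k

# Two probes that agree on a weak common signal but emphasize different cues,
# mimicking probes trained on different source datasets.
rng = np.random.default_rng(1)
n = 1000
common = rng.normal(size=n)
scores_a = common + rng.normal(size=n)
scores_b = common + rng.normal(size=n)

print(f"top-100 overlap: {topk_overlap(scores_a, scores_b, k=100):.2f}")
```

When the two score vectors share only part of their signal, as simulated here, the top-k sets diverge substantially even though both probes separate the classes individually, which is the pattern the paper reports.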