Beneath the Surface: Investigating LLMs' Capabilities for Communicating with Subtext

arXiv cs.CL / 4/8/2026


Key Points

  • The paper investigates whether LLMs can generate and interpret subtext (implied meaning beyond literal wording) and argues that current models often struggle with this socially grounded aspect of communication.
  • It introduces four new evaluation suites, including allegory writing/interpretation and multi-agent or multimodal game settings inspired by board games, to measure subtext capabilities more systematically.
  • The findings show that frontier models have a strong tendency toward overly literal, explicit communication, frequently producing literal clues (60% of clues in one environment, Visual Allusions).
  • Some models can reduce literalness by leveraging shared common ground with a counterpart, yielding a 30–50% reduction in overly literal clues, but they struggle when that common ground is not explicitly stated.
  • For allegory understanding, the authors find that conditions such as paratext and persona cues can significantly change how subtext is interpreted, highlighting sensitivity to context framing.

Abstract

Human communication is fundamentally creative, and often makes use of subtext -- implied meaning that goes beyond the literal content of the text. Here, we systematically study whether language models can use subtext in communicative settings, and introduce four new evaluation suites to assess these capabilities. Our evaluation settings range from writing & interpreting allegories to playing multi-agent and multi-modal games inspired by the rules of board games like Dixit. We find that frontier models generally exhibit a strong bias towards overly literal, explicit communication, and thereby fail to account for nuanced constraints -- even the best-performing models generate literal clues 60% of the time in one of our environments, Visual Allusions. However, we find that some models can sometimes make use of common ground with another party to help them communicate with subtext, achieving a 30%–50% reduction in overly literal clues; but they struggle to infer the presence of common ground when it is not explicitly stated. For allegory understanding, we find that paratextual and persona conditions significantly shift the interpretation of subtext. Overall, our work provides quantifiable measures for an inherently complex and subjective phenomenon like subtext and reveals many weaknesses and idiosyncrasies of current LLMs. We hope this research inspires future work towards socially grounded creative communication and reasoning.