Is This Just Fantasy? Language Model Representations Reflect Human Judgments of Event Plausibility

arXiv cs.CL / 4/29/2026

💬 Opinion · Ideas & Deep Analysis · Models & Research

Key Points

  • The paper investigates whether language models can accurately categorize sentence modality (e.g., possible, impossible, nonsensical) as required by many downstream tasks.
  • It identifies “modal difference vectors” (linear representations) within multiple LMs that distinguish between modal categories more reliably than prior studies suggested.
  • The authors show that these modal difference vectors emerge in a consistent order as models become more competent, whether competence increases through training steps, layer depth, or parameter count.
  • They demonstrate that directions in activation space can predict fine-grained human judgments of event plausibility, linking model internal representations to interpretable features used by people.
  • The work uses mechanistic interpretability techniques to provide new insights that may help explain how humans process and distinguish modal categories.
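The core technique behind the points above can be sketched in a few lines. A minimal, hypothetical illustration (not the paper's actual code): stand-in activation vectors replace real LM hidden states, a "modal difference vector" is taken as the normalized difference of the two category means, and sentences are scored by projecting their activations onto that direction. The category names, dimensions, and data here are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in activations: in practice these would be hidden states
# extracted from an LM layer for sentences from each modal category.
dim = 16
possible = rng.normal(loc=0.5, scale=1.0, size=(100, dim))     # "possible" sentences
impossible = rng.normal(loc=-0.5, scale=1.0, size=(100, dim))  # "impossible" sentences

# A modal difference vector: the normalized difference of the category means.
diff = possible.mean(axis=0) - impossible.mean(axis=0)
diff /= np.linalg.norm(diff)

def modal_score(activations: np.ndarray) -> np.ndarray:
    """Project activations onto the modal difference vector.

    Larger scores place an activation toward the 'possible' end of the
    direction; smaller scores, toward the 'impossible' end.
    """
    return activations @ diff

# Sanity check on held-out synthetic samples: the two categories
# should separate along the difference vector.
held_out_possible = rng.normal(loc=0.5, scale=1.0, size=(20, dim))
held_out_impossible = rng.normal(loc=-0.5, scale=1.0, size=(20, dim))
print(modal_score(held_out_possible).mean() > modal_score(held_out_impossible).mean())
# prints True
```

Because projections are scalar-valued, they can also be correlated against graded human plausibility ratings rather than only thresholded into discrete categories, which mirrors the fine-grained modeling the key points describe.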

Abstract

Language models (LMs) are used for a diverse range of tasks, from question answering to writing fantastical stories. In order to reliably accomplish these tasks, LMs must be able to discern the modal category of a sentence (i.e., whether it describes something that is possible, impossible, completely nonsensical, etc.). However, recent studies have called into question the ability of LMs to categorize sentences according to modality (Michaelov et al., 2025; Kauf et al., 2023). In this work, we identify linear representations that discriminate between modal categories within a variety of LMs, or modal difference vectors. Analysis of modal difference vectors reveals that LMs have access to more reliable modal categorization judgments than previously reported. Furthermore, we find that modal difference vectors emerge in a consistent order as models become more competent (i.e., through training steps, layers, and parameter count). Notably, we find that modal difference vectors identified within LM activations can be used to model fine-grained human categorization behavior. This potentially provides a novel view into how human participants distinguish between modal categories, which we explore by correlating projections along modal difference vectors with human participants' ratings of interpretable features. In summary, we derive new insights into LM modal categorization using techniques from mechanistic interpretability, with the potential to inform our understanding of modal categorization in humans.