AI Navigate

To Words and Beyond: Probing Large Language Models for Sentence-Level Psycholinguistic Norms of Memorability and Reading Times

arXiv cs.CL / 3/13/2026


Key Points

  • The paper investigates whether Large Language Models can estimate sentence-level psycholinguistic norms such as memorability and reading times, extending prior work on word-level norms.
  • It finds that zero-shot and few-shot prompting are inconsistently predictive, while supervised fine-tuning can align LLM outputs with human norms for sentence-level features.
  • With fine-tuning, models can provide estimates that correlate with human data and even surpass simple, interpretable baseline predictors for sentence-level measures.
  • The study cautions that relying on LLM prompts as cognitive proxies requires care due to variable performance across prompts and settings.

Abstract

Large Language Models (LLMs) have recently been shown to produce estimates of psycholinguistic norms, such as valence, arousal, or concreteness, for words and multiword expressions that correlate with human judgments. These estimates are obtained by prompting an LLM, in zero-shot fashion, with a question similar to those used in human studies. Meanwhile, for other norms, such as lexical decision time or age of acquisition, LLMs require supervised fine-tuning to produce results that align with ground-truth values. In this paper, we extend this approach to the previously unstudied features of sentence memorability and reading times, which involve the relationship between multiple words in a sentence-level context. Our results show that, via fine-tuning, models can provide estimates that correlate with human-derived norms and exceed the predictive power of interpretable baseline predictors, demonstrating that LLMs contain useful information about sentence-level features. At the same time, our results show very mixed zero-shot and few-shot performance, providing further evidence that care is needed when using LLM prompting as a proxy for human cognitive measures.