To Words and Beyond: Probing Large Language Models for Sentence-Level Psycholinguistic Norms of Memorability and Reading Times
arXiv cs.CL / 3/13/2026
Key Points
- The paper investigates whether large language models (LLMs) can estimate sentence-level psycholinguistic norms such as memorability and reading times, extending prior work on word-level norms.
- It finds that zero-shot and few-shot prompting are inconsistently predictive, while supervised fine-tuning can align LLM outputs with human norms for sentence-level features.
- With fine-tuning, models can provide estimates that correlate with human data and even surpass simple, interpretable baseline predictors for sentence-level measures.
- The study cautions that relying on LLM prompts as cognitive proxies requires care due to variable performance across prompts and settings.
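The core evaluation step described above is a correlation between LLM-derived estimates and human norms. As a minimal sketch of that comparison, the snippet below computes a Spearman rank correlation between hypothetical model scores and human ratings; the data values, function names, and the choice of pure-Python ranking (with average ranks for ties) are illustrative assumptions, not taken from the paper.

```python
def average_ranks(values):
    """Rank values from 1..n, assigning tied entries their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a run of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-indexed
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman's rho = Pearson correlation of the two rank vectors."""
    rx, ry = average_ranks(xs), average_ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical human memorability ratings vs. LLM estimates for 4 sentences
human_norms = [3.1, 4.5, 2.0, 5.0]
model_estimates = [2.9, 4.0, 2.5, 4.8]
rho = spearman(human_norms, model_estimates)  # 1.0: same rank order
```

In practice one would report such a correlation over a held-out set of sentences, which is the kind of agreement-with-human-data the key points describe.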