To Words and Beyond: Probing Large Language Models for Sentence-Level Psycholinguistic Norms of Memorability and Reading Times
arXiv cs.CL · March 13, 2026
Key Points
- The paper investigates whether Large Language Models can estimate sentence-level psycholinguistic norms such as memorability and reading times, extending prior work on word-level norms.
- It finds that zero-shot and few-shot prompting yield inconsistent predictions, while supervised fine-tuning can align LLM outputs with human norms for sentence-level features.
- With fine-tuning, models can provide estimates that correlate with human data and even surpass simple, interpretable baseline predictors for sentence-level measures.
- The study cautions that relying on LLM prompts as cognitive proxies requires care due to variable performance across prompts and settings.
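The paper's core evaluation compares model-produced estimates against human norms via correlation. A minimal sketch of that comparison is below; the `pearson_r` helper and all scores are illustrative placeholders, not data or code from the paper.

```python
# Hedged sketch: comparing LLM-estimated sentence-level norms (e.g.,
# memorability ratings) against human norms via Pearson correlation.
# All numbers are hypothetical placeholders for illustration only.
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical human memorability norms vs. LLM estimates for five sentences.
human = [0.62, 0.45, 0.78, 0.30, 0.55]
model = [0.60, 0.50, 0.70, 0.35, 0.58]
print(round(pearson_r(human, model), 3))
```

A higher correlation under fine-tuning than under zero-shot or few-shot prompting would reflect the paper's reported pattern; in practice, rank correlations such as Spearman's rho are also commonly reported for norm prediction.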