Probing for Reading Times

arXiv cs.CL / April 22, 2026

📰 News · Models & Research

Key Points

  • The paper investigates whether language models' internal representations contain cognitive signals that correlate with human reading times, using eye-tracking data from five languages.
  • Regularized linear regression probes are fit to each model layer's representations, and their predictive power is compared against scalar predictors: surprisal, information value, and logit-lens surprisal.
  • Results show that early-layer representations predict early-pass eye-tracking measures (e.g., first fixation and gaze duration) better than surprisal, suggesting that low-level lexical and structural information aligns with early stages of human processing.
  • For later reading-time measures (e.g., total reading time), surprisal remains the strongest predictor despite being more compressed, indicating different mechanisms across reading stages.
  • The best predictor varies by language and eye-tracking metric, and combining surprisal with early-layer representations improves performance.
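The probing setup sketched in these points can be illustrated with a minimal ridge (L2-regularized) linear probe. This is a sketch on synthetic data, not the paper's code: `hidden_states`, `surprisal`, and `reading_times` stand in for a layer's activations, a scalar predictor, and an eye-tracking measure such as gaze duration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, dim, n_train = 500, 64, 400

# Synthetic stand-ins: one layer's per-word vectors and a scalar predictor.
hidden_states = rng.normal(size=(n_words, dim))
surprisal = rng.normal(size=(n_words, 1))
# Synthetic reading times that depend on both kinds of signal.
reading_times = (hidden_states[:, 0] + 0.5 * surprisal[:, 0]
                 + rng.normal(scale=0.1, size=n_words))

def probe_r2(X, y, alpha=1.0):
    """Held-out R^2 of a closed-form ridge probe: w = (X'X + aI)^-1 X'y."""
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    pred = Xte @ w
    return 1 - np.sum((yte - pred) ** 2) / np.sum((yte - yte.mean()) ** 2)

r2_layer = probe_r2(hidden_states, reading_times)
r2_scalar = probe_r2(surprisal, reading_times)
# Concatenating surprisal with the layer features, mirroring the reported gains:
r2_both = probe_r2(np.hstack([hidden_states, surprisal]), reading_times)
print(f"layer={r2_layer:.2f} surprisal={r2_scalar:.2f} both={r2_both:.2f}")
```

On this toy data the combined feature set scores highest, echoing the paper's observation that surprisal plus early-layer representations outperforms either alone; real experiments would use cross-validation over eye-tracking corpora.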

Abstract

Probing has shown that language model representations encode rich linguistic information, but it remains unclear whether they also capture cognitive signals about human processing. In this work, we probe language model representations for human reading times. Using regularized linear regression on two eye-tracking corpora spanning five languages (English, Greek, Hebrew, Russian, and Turkish), we compare the representations from every model layer against scalar predictors -- surprisal, information value, and logit-lens surprisal. We find that the representations from early layers outperform surprisal in predicting early-pass measures such as first fixation and gaze duration. The concentration of predictive power in the early layers suggests that human-like processing signatures are captured by low-level structural or lexical representations, pointing to a functional alignment between model depth and the temporal stages of human reading. In contrast, for late-pass measures such as total reading time, scalar surprisal remains superior, despite its being a much more compressed representation. We also observe performance gains when using both surprisal and early-layer representations. Overall, we find that the best-performing predictor varies strongly depending on the language and eye-tracking measure.
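The abstract's layer-by-layer comparison can be sketched by sweeping the same kind of ridge probe over a stack of layers. Again this uses synthetic data: the layer carrying the signal is planted by construction, so the "early layer wins" outcome here is an assumption for illustration, not a replication of the paper's finding.

```python
import numpy as np

rng = np.random.default_rng(1)
n_words, dim, n_layers, n_train = 300, 32, 6, 240

# Mock stack of per-layer hidden states, shape (n_layers, n_words, dim).
layers = rng.normal(size=(n_layers, n_words, dim))
# Plant the signal for an "early-pass" measure in an early layer (layer 1).
gaze_duration = layers[1, :, 0] + rng.normal(scale=0.2, size=n_words)

def ridge_r2(X, y, alpha=1.0):
    """Held-out R^2 of a closed-form ridge probe fit on the first n_train words."""
    Xtr, ytr, Xte, yte = X[:n_train], y[:n_train], X[n_train:], y[n_train:]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(X.shape[1]), Xtr.T @ ytr)
    pred = Xte @ w
    return 1 - np.sum((yte - pred) ** 2) / np.sum((yte - yte.mean()) ** 2)

# Probe every layer and see where predictive power concentrates.
scores = [ridge_r2(layers[k], gaze_duration) for k in range(n_layers)]
best_layer = int(np.argmax(scores))
print(f"best layer: {best_layer}, R^2 = {scores[best_layer]:.2f}")
```

In the paper's setting this sweep is run per language and per eye-tracking measure, which is how the concentration of predictive power in early layers is observed.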