Prediction of Item Difficulty for Reading Comprehension Items by Creation of Annotated Item Repository

arXiv cs.CL / 4/1/2026


Key Points

  • The paper proposes predicting Item Response Theory (IRT) difficulty for reading comprehension items from text content and reported percent-correct (p-value) data.
  • It builds an annotated repository using U.S. standardized test reading passages and student response data across grades 3–8 (2018–2023), enriched with linguistic, passage/test, and context metadata.
  • A penalized regression model using these features achieves RMSE 0.59 versus a baseline RMSE of 0.92, with a 0.77 correlation between true and predicted difficulty.
  • Adding embeddings from language models (ModernBERT, BERT, and LLaMA) yields only marginal improvements; linguistic features alone or LLM embeddings alone perform similarly to the combined feature set.
  • The authors suggest the difficulty prediction model can be used to filter and categorize reading items and plan to release the model publicly for broader stakeholder use.
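The penalized-regression setup in the key points can be illustrated with a small sketch. This is not the paper's code or data; the features, sizes, and the ridge penalty `alpha` below are illustrative stand-ins for the repository's linguistic, passage/test, and context features, and the baseline is a simple mean predictor, mirroring how the reported RMSE 0.59 is compared against a baseline RMSE of 0.92.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: rows are items, columns are annotated features
# (e.g., linguistic, passage/test, and context features in the repository).
n_items, n_features = 200, 12
X = rng.normal(size=(n_items, n_features))
true_w = rng.normal(size=n_features)
y = X @ true_w + rng.normal(scale=0.5, size=n_items)  # IRT-style difficulty

# Simple train/test split
split = 150
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

# Closed-form ridge (L2-penalized) regression: w = (X'X + alpha*I)^{-1} X'y
alpha = 1.0  # penalty strength; illustrative, not from the paper
A = X_tr.T @ X_tr + alpha * np.eye(n_features)
w = np.linalg.solve(A, X_tr.T @ y_tr)
pred = X_te @ w

# Evaluate as in the paper's summary: RMSE vs. a mean-prediction baseline,
# plus the correlation between true and predicted difficulty.
rmse = np.sqrt(np.mean((y_te - pred) ** 2))
baseline = np.sqrt(np.mean((y_te - y_tr.mean()) ** 2))
corr = np.corrcoef(y_te, pred)[0, 1]
print(rmse < baseline)  # the feature model should beat the mean baseline
```

On real item data the feature matrix would concatenate the annotated metadata (and optionally LLM embeddings), but the fit-and-evaluate loop is the same.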

Abstract

Predicting item difficulty from an item's text content is of substantial interest. In this paper, we focus on the related problem of recovering IRT-based difficulty when the data originally reported only item p-values (the percentage of correct responses). We model item difficulty using a repository of reading passages and student response data from U.S. standardized tests in New York and Texas for grades 3–8, spanning 2018–2023. This repository is annotated with metadata on (1) linguistic features of the reading items, (2) test features of the passage, and (3) context features. A penalized regression model using all these features predicts item difficulty with an RMSE of 0.59, compared to a baseline RMSE of 0.92, and with a correlation of 0.77 between true and predicted difficulty. We supplement these features with embeddings from LLMs (ModernBERT, BERT, and LLaMA), which marginally improve item difficulty prediction. When models use only item linguistic features or only LLM embeddings, prediction performance is similar, suggesting that a single feature category may suffice. This item difficulty prediction model can be used to filter and categorize reading items and will be made publicly available for use by other stakeholders.
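The abstract's core problem is recovering an IRT-scale difficulty from a reported p-value. The paper's actual recovery procedure is not specified here, but a standard back-of-the-envelope link exists: under a Rasch model with examinee ability fixed at theta = 0, an item's probability correct is p = 1 / (1 + exp(b)), so difficulty can be proxied as b = -logit(p). The function below is a hypothetical illustration of that relationship, not the authors' method.

```python
import numpy as np

def difficulty_from_pvalue(p):
    """Map classical p-values (proportion of correct responses) to a
    logit-scale difficulty proxy: b = -log(p / (1 - p)). Harder items
    (low p) map to larger b; p = 0.5 maps to b = 0. Assumes a Rasch
    model with average ability theta = 0 (an illustrative simplification)."""
    p = np.clip(np.asarray(p, dtype=float), 1e-6, 1 - 1e-6)  # avoid log(0)
    return -np.log(p / (1 - p))

p_values = np.array([0.9, 0.5, 0.2])       # easy, medium, hard items
print(difficulty_from_pvalue(p_values))    # roughly [-2.20, 0.00, 1.39]
```

In practice, ability distributions vary by grade and year, which is one reason the paper's model also conditions on test and context features rather than relying on p-values alone.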