A Dual-Task Paradigm to Investigate Sentence Comprehension Strategies in Language Models

arXiv cs.CL · April 30, 2026


Key Points

  • The paper introduces a dual-task paradigm to study how language models allocate limited working-memory resources between sentence comprehension and an arithmetic computation task.
  • In experiments, models including GPT-4o, o3-mini, and o4-mini change their comprehension behavior under dual-task conditions toward plausibility-based, human-like rational inference.
  • The key evidence is a larger accuracy gap between plausible sentences (e.g., “the bartender blended the cocktail”) and implausible, role-reversed sentences under dual-task constraints than under single-task conditions.
  • The results imply that balancing memory storage and sentence processing constraints can drive more rational inference in LMs, aligning with theories of human sentence comprehension.
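The dual-task construction described above can be illustrated with a short sketch. This is not the authors' code; the function name and interleaving scheme are assumptions based only on the paper's example prompt, "The 2 cocktail + blended 3 =...", in which arithmetic operands and operators are woven between sentence words so the model must hold the computation in memory while comprehending the sentence.

```python
def make_dual_task_prompt(sentence, operands, operator="+"):
    """Interleave arithmetic tokens with the opening words of a sentence.

    Hypothetical reconstruction of the paper's dual-task prompt format:
    word, operand, word, operator, word, operand, "=", then the rest.
    """
    words = sentence.split()
    pieces = [
        words[0], str(operands[0]),   # "The 2"
        words[1], operator,           # "cocktail +"
        words[2], str(operands[1]),   # "blended 3"
        "=",                          # model must later resolve 2 + 3
    ]
    pieces += words[3:]               # remainder of the sentence, if any
    return " ".join(pieces)

# Reproduces the paper's example prompt shape:
print(make_dual_task_prompt("The cocktail blended", (2, 3)))
```

The point of the interleaving is that the arithmetic cannot be completed until after the sentence material has been processed, forcing storage and processing to compete for the same limited resources.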

Abstract

Language models (LMs) behave more like humans when their cognitive resources are restricted, particularly in predicting sentence processing costs such as reading times. However, it remains unclear whether such constraints similarly affect sentence comprehension strategies. Moreover, existing methods do not directly target the balance between memory storage and sentence processing, which is central to human working memory. To address this gap, we propose a dual-task paradigm that combines an arithmetic computation task with a sentence comprehension task, such as "The 2 cocktail + blended 3 =..." Our experiments show that under dual-task conditions, GPT-4o, o3-mini, and o4-mini shift toward plausibility-based comprehension, mirroring humans' rational inference. Specifically, these models show a greater accuracy gap between plausible sentences (e.g., "The cocktail was blended by the bartender") and implausible sentences (e.g., "The bartender was blended by the cocktail") in the dual-task condition than in the single-task conditions. These findings suggest that constraints on the balance between memory and processing resources promote rational inference in LMs. More broadly, they support the view that human-like sentence comprehension fundamentally arises from the allocation of limited cognitive resources.
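The reported evidence is a plausibility effect: accuracy on plausible sentences minus accuracy on implausible ones, compared across conditions. A minimal sketch of that comparison, using made-up accuracy numbers purely to show the shape of the analysis (they are not the paper's results):

```python
def plausibility_gap(acc_plausible, acc_implausible):
    """Accuracy difference between plausible and implausible sentences."""
    return acc_plausible - acc_implausible

# Hypothetical accuracies for illustration only.
single_gap = plausibility_gap(0.95, 0.90)  # small gap: form-driven comprehension
dual_gap = plausibility_gap(0.92, 0.70)    # larger gap: plausibility-driven

# A positive difference means the dual task widened the plausibility effect,
# the pattern the paper interprets as a shift toward rational inference.
effect = dual_gap - single_gap
```

A widened gap under load indicates the model is falling back on world-knowledge plausibility when resources are scarce, rather than faithfully parsing implausible role assignments.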