Plausibility as Commonsense Reasoning: Humans Succeed, Large Language Models Do Not

arXiv cs.CL / 4/7/2026


Key Points

  • The study investigates whether large language models use plausibility-based “commonsense reasoning” in a human-like, structure-sensitive way when resolving Turkish prenominal relative-clause attachment ambiguities.
  • Humans in a speeded forced-choice experiment show a strong, correctly directed effect: event plausibility systematically shifts attachment preference between High Attachment and Low Attachment.
  • The researchers evaluate multiple Turkish and multilingual LLMs using matched High-Attachment/Low-Attachment continuations compared via mean per-token log-probabilities.
  • Across the tested models, plausibility-driven preference shifts are weak, unstable, or even reversed compared with human judgments.
  • The paper concludes that, on this diagnostic, plausibility information does not reliably guide LLM attachment decisions the way it does for humans, and argues that Turkish relative-clause attachment is a valuable cross-linguistic benchmark beyond generic language-task scores.
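The preference-based evaluation described above can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions, not the authors' code: the function names and toy log-probability values are hypothetical, and in the actual study the per-token log-probabilities would come from the evaluated Turkish and multilingual LLMs.

```python
def mean_per_token_logprob(token_logprobs):
    """Average log-probability per token of a continuation,
    which normalizes for continuation length."""
    return sum(token_logprobs) / len(token_logprobs)

def attachment_preference(ha_logprobs, la_logprobs):
    """Compare matched High-Attachment (HA) and Low-Attachment (LA)
    continuations; return the preferred parse and the score margin."""
    ha = mean_per_token_logprob(ha_logprobs)
    la = mean_per_token_logprob(la_logprobs)
    return ("HA" if ha > la else "LA"), ha - la

# Toy example: HA continuation averages -1.5, LA averages -2.0,
# so the model would be scored as preferring High Attachment.
pref, margin = attachment_preference([-1.0, -2.0], [-3.0, -1.0])
```

The human plausibility effect would then correspond to the margin shifting sign in the expected direction across plausibility conditions; the paper reports that this shift is weak, unstable, or reversed in the tested models.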

Abstract

Large language models achieve strong performance on many language tasks, yet it remains unclear whether they integrate world knowledge with syntactic structure in a human-like, structure-sensitive way during ambiguity resolution. We test this question in Turkish prenominal relative-clause attachment ambiguities, where the same surface string permits high attachment (HA) or low attachment (LA). We construct ambiguous items that keep the syntactic configuration fixed and ensure both parses remain pragmatically possible, while graded event plausibility selectively favors High Attachment vs. Low Attachment. The contrasts are validated with independent norming ratings. In a speeded forced-choice comprehension experiment, humans show a large, correctly directed plausibility effect. We then evaluate Turkish and multilingual LLMs in a parallel preference-based setup that compares matched HA/LA continuations via mean per-token log-probability. Across models, plausibility-driven shifts are weak, unstable, or reversed. The results suggest that, in the tested models, plausibility information does not guide attachment preferences as reliably as it does in human judgments, and they highlight Turkish RC attachment as a useful cross-linguistic diagnostic beyond broad benchmarks.