Comparing Human and Large Language Model Interpretation of Implicit Information

arXiv cs.CL / April 21, 2026

📰 News · Models & Research

Key Points

  • The study examines whether human approaches to interpreting implicit meanings transfer to interactions with large language models (LLMs).
  • It introduces a new task, Implicit Information Extraction (IIE), and an LLM-based pipeline that constructs a structured knowledge graph via relational triplet extraction, validation of implicit inferences, and temporal relation analysis.
  • Experiments compare two LLMs with crowdsourced human judgments across two datasets, finding that while humans often agree with model triplets, humans also suggest many additional relations.
  • The results suggest LLMs may have limited coverage of implicit information and behave more conservatively than humans in socially rich contexts, while human conservatism increases in shorter, fact-focused contexts.
  • The authors provide open-source code for the proposed IIE pipeline on GitHub.

Abstract

The interpretation of implicit meanings is an integral aspect of human communication. However, this framework may not transfer to interactions with Large Language Models (LLMs). To investigate this, we introduce the task of Implicit Information Extraction (IIE) and propose an LLM-based IIE pipeline that builds a structured knowledge graph from a context sentence by extracting relational triplets, validating implicit inferences, and analyzing temporal relations. We evaluate two LLMs against crowdsourced human judgments on two datasets. We find that humans agree with most model triplets yet consistently propose many additions, indicating limited coverage in current LLM-based IIE. Moreover, in our experiments, models appear to be more conservative about implicit inferences than humans in socially rich contexts, whereas humans become more conservative in shorter, fact-oriented contexts. Our code is available at https://github.com/Antonio-Dee/IIE_from_LLM.
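The three pipeline stages described above (triplet extraction, implicit-inference validation, temporal relation analysis) can be sketched roughly as follows. This is an illustrative outline only, not the authors' implementation: the `Triplet` type, the prompt wording, and the `mock_llm` stub are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triplet:
    subject: str
    relation: str
    obj: str

def extract_triplets(context, llm):
    """Stage 1: prompt the LLM for (subject, relation, object) triplets."""
    lines = llm(f"Extract relational triplets from: {context}")
    return [Triplet(*(part.strip() for part in line.split("|"))) for line in lines]

def validate_implicit(triplets, context, llm):
    """Stage 2: keep only triplets the LLM judges to be valid implicit inferences."""
    return [t for t in triplets
            if llm(f"Is ({t.subject}, {t.relation}, {t.obj}) implied by '{context}'?") == ["yes"]]

def temporal_order(triplets, context, llm):
    """Stage 3: ask the LLM for a temporal relation between each pair of triplets."""
    edges = []
    for i, a in enumerate(triplets):
        for b in triplets[i + 1:]:
            rel = llm(f"Temporal relation of '{a.relation}' to '{b.relation}' in '{context}'?")
            edges.append((a, rel[0], b))
    return edges

def iie_pipeline(context, llm):
    """Build a small knowledge graph: validated triplets as nodes, temporal relations as edges."""
    triplets = validate_implicit(extract_triplets(context, llm), context, llm)
    return {"nodes": triplets, "edges": temporal_order(triplets, context, llm)}

def mock_llm(prompt):
    """Deterministic stand-in for a real LLM call, for demonstration only."""
    if prompt.startswith("Extract"):
        return ["Alice | offended | Bob", "Alice | apologized to | Bob"]
    if prompt.startswith("Is ("):
        return ["yes"]
    return ["before"]

graph = iie_pipeline("Having offended Bob, Alice apologized.", mock_llm)
```

With a real model, `llm` would wrap an API call and the validation stage would be where model and human judgments diverge, per the paper's findings.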