Commonsense Knowledge with Negation: A Resource to Enhance Negation Understanding

arXiv cs.CL / 4/23/2026


Key Points

  • The paper argues that although negation is fundamental in natural language semantics, LLMs often perform poorly on tasks that require understanding negation.
  • It highlights a gap in prior work: commonsense knowledge resources have been studied extensively, but typically without explicitly incorporating negation.
  • The authors propose an automatic method to augment existing commonsense knowledge corpora with negation, producing two new corpora with over 2 million triples expressed as if-then relations (a toy sketch of such an augmentation follows this list).
  • They report that pre-training LLMs on these negation-enhanced corpora improves the models’ ability to understand negation.

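The summary does not spell out how the augmentation works, so the snippet below is only a minimal sketch of what negating ATOMIC-style if-then triples could look like: it assumes a (head, relation, tail) format and a small lookup table of negation cues, neither of which is taken from the paper.

```python
# Hypothetical sketch of negation augmentation for if-then commonsense triples.
# The (head, relation, tail) format follows ATOMIC-style resources; the paper's
# actual augmentation procedure may differ from this illustration.

NEGATION_CUES = {
    "is": "is not",
    "can": "cannot",
    "will": "will not",
    "wants": "does not want",
}

def negate_head(head: str) -> str:
    """Insert a negation cue into the head event, if a known verb is present."""
    tokens = head.split()
    for i, tok in enumerate(tokens):
        if tok in NEGATION_CUES:
            tokens[i] = NEGATION_CUES[tok]
            return " ".join(tokens)
    # Fallback for heads without a recognized verb (purely illustrative).
    return f"it is not the case that {head}"

def augment_with_negation(triples):
    """Yield each original triple plus a variant whose head event is negated."""
    for head, relation, tail in triples:
        yield head, relation, tail
        # In practice the tail would also need revision; kept unchanged here
        # only to keep the sketch short.
        yield negate_head(head), relation, tail

# Example if-then triple: "if PersonX is hungry, then PersonX eats breakfast"
sample = [("PersonX is hungry", "xEffect", "PersonX eats breakfast")]
for triple in augment_with_negation(sample):
    print(triple)
```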
Abstract

Negation is a common and important semantic feature of natural language, yet Large Language Models (LLMs) struggle on natural language understanding tasks that involve negation. Commonsense knowledge, on the other hand, although well studied, has rarely been investigated in combination with negation. In this work, we show that commonsense knowledge expressed with negation is challenging for models to understand. We present a novel approach to automatically augment existing commonsense knowledge corpora with negation, yielding two new corpora that contain over 2M triples with if-then relations. In addition, pre-training LLMs on our corpora improves their understanding of negation.
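The abstract mentions pre-training LLMs on the negation-augmented corpora but gives no setup details. A common recipe is to verbalize each if-then triple into a natural-language sentence and then continue language-model pre-training on the resulting text; the relation templates and file name below are assumptions made for illustration, not the paper's configuration.

```python
# Hypothetical verbalization of if-then triples into pre-training text.
# Relation templates are illustrative; the paper's actual templates may differ.

TEMPLATES = {
    "xEffect": "If {head}, then as a result {tail}.",
    "xWant": "If {head}, then {tail} is wanted.",
}

def verbalize(head: str, relation: str, tail: str) -> str:
    """Turn a (head, relation, tail) triple into one training sentence."""
    template = TEMPLATES.get(relation, "If {head}, then {tail}.")
    return template.format(head=head, tail=tail)

triples = [
    ("PersonX is not hungry", "xEffect", "PersonX skips breakfast"),
    ("PersonX does not study", "xEffect", "PersonX fails the exam"),
]

# Write one sentence per line; this text could then feed a standard
# causal-LM continued pre-training loop.
with open("negation_corpus.txt", "w") as f:
    for head, rel, tail in triples:
        f.write(verbalize(head, rel, tail) + "\n")
```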