KnowledgeBerg: Evaluating Systematic Knowledge Coverage and Compositional Reasoning in Large Language Models

arXiv cs.AI / 4/21/2026


Key Points

  • The article proposes a framework for evaluating LLMs on two often-overlooked real-world capabilities: systematic coverage of a bounded knowledge universe and compositional set-based reasoning over it.
  • It introduces KnowledgeBerg, a benchmark with 4,800 multiple-choice questions built from 1,183 enumeration seeds across 10 domains and 17 languages, using authoritative sources to keep the universes reproducible.
  • Experiments with representative open-source LLMs show major weaknesses, with low performance on universe enumeration (5.26–36.88 F1) and knowledge-grounded reasoning (16.00–44.19 accuracy).
  • The authors classify failures into three stages—completeness (missing knowledge), awareness (failing to identify requirements), and application (incorrect reasoning execution)—and find the same pattern across languages and model sizes.
  • While test-time compute and retrieval augmentation provide some improvements, notable gaps remain, suggesting current LLMs struggle to organize structured knowledge and execute compositional reasoning even within bounded domains.
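The "knowledge width / reasoning depth" framing described above can be sketched with ordinary set operations. The universe and entities below are illustrative placeholders, not items from the benchmark:

```python
# Illustrative sketch: width = size of the bounded universe a question
# implicitly requires; depth = number of compositional set operations.
# The domain ("EU member states") is chosen for illustration only.

universe = {  # a bounded knowledge universe the model must cover in full
    "Austria", "Belgium", "France", "Germany", "Ireland", "Sweden",
}
eurozone = {"Austria", "Belgium", "France", "Germany", "Ireland"}
nato = {"Belgium", "France", "Germany", "Sweden"}

# Question: "Which of these EU members use the euro but are not in NATO?"
# Answering requires full coverage of `universe` (knowledge width = 6)
# plus two set operations (reasoning depth = 2): intersection, difference.
answer = (universe & eurozone) - nato
print(sorted(answer))  # → ['Austria', 'Ireland']
```

A question like this looks like a one-line trivia item, but a model that cannot enumerate the full universe, or drops one of the two operations, fails silently — which is the "tip of the iceberg" phenomenon the paper describes.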

Abstract

Many real-world questions appear deceptively simple yet implicitly demand two capabilities: (i) systematic coverage of a bounded knowledge universe and (ii) compositional set-based reasoning over that universe, a phenomenon we term "the tip of the iceberg." We formalize this challenge through two orthogonal dimensions: knowledge width, the cardinality of the required universe, and reasoning depth, the number of compositional set operations. We introduce KnowledgeBerg, a benchmark of 4,800 multiple-choice questions derived from 1,183 enumeration seeds spanning 10 domains and 17 languages, with universes grounded in authoritative sources to ensure reproducibility. Representative open-source LLMs demonstrate severe limitations, achieving only 5.26-36.88 F1 on universe enumeration and 16.00-44.19 accuracy on knowledge-grounded reasoning. Diagnostic analyses reveal three stages of failure: completeness, or missing knowledge; awareness, or failure to identify requirements; and application, or incorrect reasoning execution. This pattern persists across languages and model scales. Although test-time compute and retrieval augmentation yield measurable gains -- up to 4.35 and 3.78 points, respectively -- substantial gaps remain, exposing limitations in how current LLMs organize structured knowledge and execute compositional reasoning over bounded domains. The dataset is available at https://huggingface.co/datasets/2npc/KnowledgeBerg.
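The enumeration F1 reported in the abstract can be read as a set-level F1 between the model's enumerated items and the gold universe. The sketch below is a plausible reading of that metric; the benchmark's exact normalization and matching rules may differ:

```python
def enumeration_f1(predicted: set, gold: set) -> float:
    """Set-level F1 between a model's enumerated items and the gold universe.

    An assumed formulation: items are compared after exact-match
    deduplication; partial/fuzzy matching rules are not modeled here.
    """
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)          # correctly enumerated items
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)     # fraction of listed items that are real
    recall = tp / len(gold)             # fraction of the universe recovered
    return 2 * precision * recall / (precision + recall)

# A model that lists 4 items, 3 of which belong to a 6-item universe:
gold = {"a", "b", "c", "d", "e", "f"}
pred = {"a", "b", "c", "x"}
print(round(enumeration_f1(pred, gold), 2))  # precision 0.75, recall 0.5 → 0.6
```

Under this reading, the reported 5.26-36.88 F1 range means even the strongest model recovers only a small, partially correct slice of each bounded universe.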