Grading the Unspoken: Evaluating Tacit Reasoning in Quantum Field Theory and String Theory with LLMs

arXiv cs.CL / April 17, 2026

💬 Opinion | Models & Research

Key Points

  • The study examines whether LLMs can meaningfully support research in highly abstract fields like quantum field theory and string theory, where correctness is tacit, layered, and not strictly binary.
  • It introduces a compact, expert-curated dataset (12 questions) and a five-level grading rubric that evaluates not just final statements, but also key concept awareness, reasoning-chain presence, tacit step reconstruction, and added “enrichment.”
  • Results show that multiple contemporary LLMs perform at near-ceiling levels on explicit derivations within stable conceptual setups, but degrade systematically when they must reconstruct omitted reasoning steps.
  • The paper attributes many failures to instability in representation selection, where models often cannot find the correct conceptual framing needed to resolve implicit structural tensions.
  • The authors argue that abstract theoretical physics is a particularly sensitive benchmark for exposing the epistemic limits of current AI evaluation methods.

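The five-level rubric described above can be sketched as a simple scoring record. This is a hypothetical illustration only: the field names, the 0–1 scale, the 0.5 threshold, and the `RubricScore` class are assumptions for clarity, not the authors' actual grading implementation.

```python
from dataclasses import dataclass

@dataclass
class RubricScore:
    """Hypothetical record for the paper's five grading levels (scale assumed 0-1)."""
    statement_correctness: float   # is the final statement right?
    concept_awareness: float       # are the key concepts invoked?
    reasoning_chain: float         # is an explicit reasoning chain present?
    tacit_reconstruction: float    # are omitted tacit steps reconstructed?
    enrichment: float              # does the answer add correct extra insight?

    def profile(self) -> str:
        # Report the first level scoring below an assumed 0.5 threshold;
        # this localizes where an answer starts to break down.
        levels = [
            ("statement", self.statement_correctness),
            ("concepts", self.concept_awareness),
            ("chain", self.reasoning_chain),
            ("tacit", self.tacit_reconstruction),
            ("enrichment", self.enrichment),
        ]
        for name, score in levels:
            if score < 0.5:
                return f"fails at: {name}"
        return "passes all levels"

# Example matching the reported pattern: strong on explicit content,
# degraded once tacit reconstruction is required.
answer = RubricScore(1.0, 1.0, 0.9, 0.3, 0.2)
print(answer.profile())  # → fails at: tacit
```

Separating the levels this way makes the paper's central observation expressible as a score profile rather than a single pass/fail bit: models can saturate the first three levels while the tacit-reconstruction level collapses.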
Abstract

Large language models have demonstrated impressive performance across many domains of mathematics and physics. One natural question is whether such models can support research in highly abstract theoretical fields such as quantum field theory and string theory. Evaluating this possibility faces an immediate challenge: correctness in these domains is layered, tacit, and fundamentally non-binary. Standard answer-matching metrics fail to capture whether intermediate conceptual steps are properly reconstructed or whether implicit structural constraints are respected. We construct a compact expert-curated dataset of twelve questions spanning core areas of quantum field theory and string theory, and introduce a five-level grading rubric separating statement correctness, key concept awareness, reasoning chain presence, tacit step reconstruction, and enrichment. Evaluating multiple contemporary LLMs, we observe near-ceiling performance on explicit derivations within stable conceptual frames, but systematic degradation when tasks require reconstruction of omitted reasoning steps or reorganization of representations under global consistency constraints. These failures are driven not only by missing intermediate steps, but by an instability in representation selection: models often fail to identify the correct conceptual framing required to resolve implicit tensions. We argue that highly abstract theoretical physics provides a uniquely sensitive lens on the epistemic limits of current evaluation paradigms.