The Efficiency Attenuation Phenomenon: A Computational Challenge to the Language of Thought Hypothesis

arXiv cs.AI / 3/25/2026

Key Points

  • The paper computationally tests the Language of Thought (LoT) hypothesis by asking whether agent cognition requires language-like, symbolic representations.
  • It proposes the “AI Private Language” thought experiment and an “Efficiency Attenuation Phenomenon (EAP)” prediction: emergent, inscrutable protocols should outperform forced human-comprehensible languages.
  • Using a cooperative navigation task under partial observability, the authors find emergent-protocol agents achieve 50.5% higher efficiency than agents constrained to a predefined symbolic protocol.
  • The results are interpreted as evidence that optimal collaborative cognition may be driven by sub-symbolic computations rather than mediated by symbolic structures, motivating pluralism in cognitive architectures.
  • The work also connects to AI ethics by implying that inter-agent communication and cognition may become non-human-readable, raising considerations for transparency and control.

Abstract

This paper computationally investigates whether thought requires a language-like format, as posited by the Language of Thought (LoT) hypothesis. We introduce the "AI Private Language" thought experiment: if two artificial agents develop an efficient, inscrutable communication protocol via multi-agent reinforcement learning (MARL), and their performance declines when forced to use a human-comprehensible language, this Efficiency Attenuation Phenomenon (EAP) challenges the LoT. We formalize this in a cooperative navigation task under partial observability. Results show that agents with an emergent protocol achieve 50.5% higher efficiency than those using a pre-defined, human-like symbolic protocol, confirming the EAP. This suggests optimal collaborative cognition in these systems is not mediated by symbolic structures but is naturally coupled with sub-symbolic computations. The work bridges philosophy, cognitive science, and AI, arguing for pluralism in cognitive architectures and highlighting implications for AI ethics.
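The core intuition behind the EAP can be illustrated without a full MARL training loop: forcing messages through a small, human-readable vocabulary is an information bottleneck. The toy sketch below (my own illustration, not the paper's setup) has a sender transmit a private 2D goal observation either over an unconstrained real-valued channel, standing in for a learned emergent protocol, or through a 4-token-per-dimension symbolic vocabulary; the receiver's reconstruction error measures the attenuation. The function names, vocabulary size, and error metric are all illustrative assumptions.

```python
import random

random.seed(0)

SYMBOLS = 4  # size of the predefined, human-comprehensible vocabulary (assumed)

def emergent_send(x):
    # Stand-in for a learned high-bandwidth protocol: pass the
    # real-valued observation through losslessly.
    return x

def symbolic_send(x):
    # Force each coordinate through a 4-token vocabulary
    # (e.g. "far-left" .. "far-right"), discarding within-bin detail.
    return tuple(min(int(v * SYMBOLS), SYMBOLS - 1) for v in x)

def symbolic_decode(tokens):
    # Receiver maps each token back to its bin centre.
    return tuple((t + 0.5) / SYMBOLS for t in tokens)

def mean_error(channel, decode, trials=10_000):
    # Average L1 distance between the sender's private goal and the
    # receiver's reconstruction, over random episodes.
    total = 0.0
    for _ in range(trials):
        goal = (random.random(), random.random())  # sender's private observation
        est = decode(channel(goal))                # receiver's estimate
        total += sum(abs(g - e) for g, e in zip(goal, est))
    return total / trials

emergent_err = mean_error(emergent_send, lambda m: m)
symbolic_err = mean_error(symbolic_send, symbolic_decode)
print(f"emergent error: {emergent_err:.4f}")
print(f"symbolic error: {symbolic_err:.4f}")
```

The emergent channel reconstructs the goal exactly, while the symbolic channel incurs an irreducible quantization error, the toy analogue of the efficiency attenuation the paper reports in its navigation task.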