Modeling Multi-Dimensional Cognitive States in Large Language Models under Cognitive Crowding

arXiv cs.CL · April 21, 2026


Key Points

  • The paper argues that existing LLM evaluation mainly focuses on single cognitive dimensions (e.g., emotion or stance) and misses interactions among multiple psychological dimensions such as emotion, thinking style, stance, and intention.
  • It introduces CognitiveBench, a new benchmark with unified annotations across four cognitive dimensions, and shows that LLMs’ accuracy drops sharply when modeling these dimensions jointly.
  • Using Gromov δ-hyperbolicity analysis, the authors find CognitiveBench has a strong hierarchical structure, which they connect to performance limitations via a phenomenon they call “Cognitive Crowding.”
  • They propose HyCoLLM, which represents cognitive states in hyperbolic space and uses Hyperbolic Guided Alignment Tuning to better align LLM representations, resulting in substantial improvements in multi-dimensional cognitive understanding, including strong results from an 8B model.

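The δ-hyperbolicity measurement in the third bullet can be made concrete. The standard four-point definition takes, for every quadruple of points, the three pairwise distance sums and reports half the gap between the two largest; δ = 0 characterizes tree metrics, so a small δ signals a strongly hierarchical dataset. The sketch below implements this brute-force estimator on a distance matrix; the paper's exact estimator and the distances it uses on CognitiveBench are not specified in the abstract, so this is an illustration of the quantity, not the authors' code.

```python
import itertools
import numpy as np

def gromov_delta(D):
    """Four-point Gromov delta-hyperbolicity of a symmetric distance matrix D.

    For each quadruple (x, y, z, w) we form the three pairwise sums
    d(x,y)+d(z,w), d(x,z)+d(y,w), d(x,w)+d(y,z); delta is the maximum over
    quadruples of half the gap between the largest and second-largest sum.
    delta == 0 iff the metric is a tree metric, so smaller delta means a
    more hierarchical (tree-like) geometry. O(n^4) -- fine for small n,
    sampled in practice on large datasets.
    """
    n = D.shape[0]
    delta = 0.0
    for x, y, z, w in itertools.combinations(range(n), 4):
        s = sorted((D[x, y] + D[z, w],
                    D[x, z] + D[y, w],
                    D[x, w] + D[y, z]))
        delta = max(delta, (s[2] - s[1]) / 2.0)
    return delta

# Shortest-path distances on a path graph (a tree): delta is exactly 0.
idx = np.arange(5)
D_tree = np.abs(np.subtract.outer(idx, idx)).astype(float)
print(gromov_delta(D_tree))  # 0.0
```

A 4-cycle, by contrast, yields δ = 1 under shortest-path distances, which is why low measured δ on CognitiveBench is evidence of tree-like rather than loopy structure.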
Abstract

Modeling human cognitive states is essential for advanced artificial intelligence. Existing Large Language Models (LLMs) mainly address isolated tasks such as emotion analysis or stance detection, and fail to capture interactions among cognitive dimensions defined in psychology, including emotion, thinking style, stance, and intention. To bridge this gap, we construct CognitiveBench, the first benchmark with unified annotations across the above four dimensions. Experiments on CognitiveBench show that although LLMs perform well on single-dimension tasks, their performance drops sharply in joint multi-dimensional modeling. Using Gromov δ-hyperbolicity analysis, we find that CognitiveBench exhibits a strong hierarchical structure. We attribute the performance bottleneck to "Cognitive Crowding": hierarchical cognitive states require representational space that grows exponentially with depth, while the Euclidean space of LLMs grows only polynomially, causing representation overlap and degraded performance. To address this mismatch, we propose HyCoLLM, which models cognitive states in hyperbolic space and aligns LLM representations via Hyperbolic Guided Alignment Tuning. Results show that HyCoLLM substantially improves multi-dimensional cognitive understanding, allowing an 8B-parameter model to outperform strong baselines, including GPT-4o.
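The exponential-vs-polynomial mismatch behind "Cognitive Crowding" has a simple geometric face: in the Poincaré ball model of hyperbolic space, a fixed Euclidean step costs more and more geodesic distance as you approach the boundary, so the ball holds exponentially growing neighborhoods that a flat Euclidean space cannot. The abstract does not spell out HyCoLLM's parameterization, so the sketch below only illustrates this property with the standard Poincaré-ball distance formula; `poincare_dist` is an illustrative helper, not the paper's implementation.

```python
import numpy as np

def poincare_dist(u, v, eps=1e-9):
    """Geodesic distance between two points inside the open unit (Poincare) ball.

    d(u, v) = arcosh(1 + 2 * ||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))
    The denominator shrinks near the boundary, so distances blow up there --
    the extra room that lets tree hierarchies embed with low distortion.
    """
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq_gap = np.dot(u - v, u - v)
    denom = max((1.0 - np.dot(u, u)) * (1.0 - np.dot(v, v)), eps)
    return np.arccosh(1.0 + 2.0 * sq_gap / denom)

# The same Euclidean gap (0.1) costs far more geodesic distance near the
# boundary than near the origin:
near_origin = poincare_dist([0.0, 0.0], [0.1, 0.0])      # ~0.20
near_boundary = poincare_dist([0.89, 0.0], [0.99, 0.0])  # ~2.45
```

This is why placing broad cognitive categories near the origin and their fine-grained children toward the boundary relieves the crowding: siblings that would overlap in Euclidean space stay well separated hyperbolically.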