Theory of Mind in Action: The Instruction Inference Task in Dynamic Human-Agent Collaboration

arXiv cs.CL / 4/20/2026


Key Points

  • The paper studies how large language model (LLM) agents can infer a human principal’s unspoken intentions when instructions are incomplete or ambiguous, treating this as a “Theory of Mind” (ToM) capability.
  • It introduces a new evaluation benchmark/task called “Instruction Inference,” designed to test ToM in dynamic, goal-oriented human-agent collaboration.
  • The authors propose “Tomcat,” an LLM-based agent with two variants: Fs-CoT (few-shot structured chain-of-thought examples) and CP (commonsense-prompt-based reasoning).
  • Tomcat is implemented on GPT-4o, DeepSeek-R1, and Gemma-3-27B, and is evaluated against a user study with 52 participants who were given the same information as the CP variant.
  • Results show that Tomcat using Fs-CoT—especially with GPT-4o and DeepSeek-R1—achieves performance comparable to human participants on intent accuracy, action optimality, and planning optimality, suggesting strong ToM potential for teaming.
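The two Tomcat variants described above differ chiefly in how the prompt is constructed. A minimal sketch of that distinction, assuming a simple prompt-assembly function (the prompt wording, example format, and function names here are hypothetical and not taken from the paper):

```python
# Hypothetical sketch of the two Tomcat prompt-construction strategies.
# Neither the prompt text nor the example format comes from the paper.

FS_COT_EXAMPLES = """\
Instruction: "Grab the key."
Context: two keys are visible; only the brass key opens the goal door.
Reasoning: the principal's goal is to open the door, so they mean the brass key.
Action: pick_up(brass_key)
"""

COMMONSENSE_PREAMBLE = (
    "You are assisting a principal who may give incomplete or ambiguous "
    "instructions. Use commonsense knowledge and the shared context to "
    "infer their intended goal before acting."
)

def build_prompt(variant: str, context: str, instruction: str) -> str:
    """Assemble an LLM prompt for the chosen (hypothetical) Tomcat variant."""
    if variant == "fs-cot":
        # Few-shot structured chain-of-thought: prepend worked examples
        # that demonstrate the reasoning format, then ask for reasoning.
        return (f"{FS_COT_EXAMPLES}\nContext: {context}\n"
                f"Instruction: {instruction}\nReasoning:")
    if variant == "cp":
        # Commonsense prompt: a commonsense framing plus the problem
        # description, with no worked examples.
        return (f"{COMMONSENSE_PREAMBLE}\nContext: {context}\n"
                f"Instruction: {instruction}\nAction:")
    raise ValueError(f"unknown variant: {variant}")
```

In this reading, Fs-CoT pays a longer-prompt cost to demonstrate the reasoning structure explicitly, while CP leans entirely on the model's own commonsense knowledge.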

Abstract

Successful human-agent teaming relies on an agent being able to understand instructions given by a (human) principal. In many cases, an instruction may be incomplete or ambiguous. In such cases, the agent must infer the unspoken intentions from their shared context, that is, it must exercise Theory of Mind (ToM) and infer the mental states of its principal. We consider the prospects of effective human-agent collaboration using large language models (LLMs). To assess ToM in a dynamic, goal-oriented, and collaborative environment, we introduce a novel task, Instruction Inference, in which an agent assists a principal in reaching a goal by interpreting incomplete or ambiguous instructions. We present Tomcat, an LLM-based agent designed to exhibit ToM reasoning in interpreting and responding to the principal's instructions. We implemented two variants of Tomcat. One, dubbed Fs-CoT (Fs for few-shot, CoT for chain-of-thought), is based on a small number of examples demonstrating the requisite structured reasoning. The other, dubbed CP (commonsense prompt), relies on commonsense knowledge and information about the problem. We realized both variants of Tomcat on three leading LLMs, namely, GPT-4o, DeepSeek-R1, and Gemma-3-27B. To evaluate the effectiveness of Tomcat, we conducted a study with 52 human participants in which we provided participants with the same information as the CP variant. We computed intent accuracy, action optimality, and planning optimality to measure the ToM capabilities of Tomcat and our study participants. We found that Tomcat with Fs-CoT, particularly with GPT-4o and DeepSeek-R1, achieves performance comparable to the human participants, underscoring its ToM potential for human-agent collaboration.
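The three reported metrics (intent accuracy, action optimality, planning optimality) can each be read as an episode-level agreement rate between the agent's output and a reference. A hedged sketch of how such a score might be computed, on made-up data; the paper's exact metric definitions may differ:

```python
def agreement_rate(predicted: list, reference: list) -> float:
    """Fraction of episodes where the output matches the reference.

    This is only an illustrative agreement rate, not the paper's
    formal definition of intent accuracy or optimality.
    """
    assert len(predicted) == len(reference)
    return sum(p == r for p, r in zip(predicted, reference)) / len(predicted)

# Illustrative episode-level intent labels (fabricated data):
intents_pred = ["open_door", "fetch_key", "open_door"]
intents_gold = ["open_door", "fetch_key", "fetch_key"]

print(agreement_rate(intents_pred, intents_gold))  # 2 of 3 episodes agree
```

Action and planning optimality would replace the intent labels with, respectively, single actions and full plans judged against an optimal solution.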